babi-t2t's Issues

Unable to reproduce joint training results

I'm trying to reproduce the bAbI joint training results in the Universal Transformer paper (UT w/o ACT). My scripts are:

t2t-datagen \
  --t2t_usr_dir=t2t_usr_dir \
  --tmp_dir=babi_data/tmp \
  --data_dir=babi_data/data \
  --problem=babi_qa_sentence_all_tasks_10k

t2t-trainer \
  --t2t_usr_dir=t2t_usr_dir \
  --tmp_dir=babi_data/tmp \
  --data_dir=babi_data/data \
  --output_dir=babi_data/output \
  --problem=babi_qa_sentence_all_tasks_10k \
  --model=babi_universal_transformer \
  --hparams_set=universal_transformer_tiny \
  --train_steps=100000

However, I can't reproduce the results; I'm getting test accuracy around 60% (I didn't train for the full 100000 steps, but the curve already seems to have plateaued). In particular, I'm not sure about three things:

  1. In transformer_base, the default batch_size is 4096:
    https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py#L1312
    Unlike T2T's BabiQaConcat, which inherits from TextProblem, this repo's BabiQaSentence inherits from Problem, where batch_size_means_tokens is set to False. So 4096 means a very large batch (4096 * 70 * 12 ≈ 3.4M tokens). I got an OOM error on a 1080 Ti card, so I changed batch_size to 512 (one way to do this is sketched after this list).

  2. In a Sep 3 commit, you changed the default transformer_ffn_type from sepconv to fc:
    tensorflow/tensor2tensor@e496897
    Should I use sepconv to run the experiments?

  3. The T2T codebase has undergone many changes since this repo was released. Will that affect the results?
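
For points 1 and 2, here is a minimal sketch of how both hparams could be overridden by registering a derived hparams set in t2t_usr_dir. This is my own illustration, not the authors' configuration; the set name is made up and the import path may vary across tensor2tensor versions:

# Hypothetical hparams set -- a sketch, not the authors' settings.
# Registering it in t2t_usr_dir makes it available to t2t-trainer via
# --hparams_set=universal_transformer_tiny_small_batch.
from tensor2tensor.models.research import universal_transformer
from tensor2tensor.utils import registry


@registry.register_hparams
def universal_transformer_tiny_small_batch():
  hparams = universal_transformer.universal_transformer_tiny()
  # batch_size_means_tokens is False for this problem, so this is a
  # batch of 512 examples, not 512 tokens.
  hparams.batch_size = 512
  # Pin the feed-forward type back to the pre-Sep-3 default (point 2).
  hparams.transformer_ffn_type = "sepconv"
  return hparams

Alternatively, a one-off override such as --hparams='batch_size=512' on the t2t-trainer command line should have the same effect.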

What actual batch_size did you use? Did you change any other hparams when running t2t-datagen and t2t-trainer?

It would be very helpful if you could share your flags.txt, flags_t2t.txt and hparams.json files. Attached are mine.

Babi-Decoder

First of all, great work on the Universal Transformer. I have some questions about the implementation and the hyperparameters:

From the paper and the implementation, it isn't clear to me how the Transformer decoder is used (if it is used at all). I have tried using only the Transformer encoder, concatenating [story, query] and taking the last output vector of the encoder to predict over the vocabulary. Set up this way, both the Universal Transformer and the vanilla Transformer worked pretty well (the Universal Transformer better, as you mention in the paper). From the implementation, I can't tell whether you also use the decoder. If so, how? Could you give a bit more detail?
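
For concreteness, a minimal TF 1.x sketch of the encoder-only setup described above (my own illustration, not the repo's code; shapes and names are assumptions):

import tensorflow as tf  # TF 1.x, matching the versions used with this repo


def encoder_only_answer_logits(encoder_output, vocab_size):
  """Predict the answer from the last encoder position.

  encoder_output: [batch, length, hidden] -- the encoding of the
  concatenated [story, query] sequence.
  """
  last_vector = encoder_output[:, -1, :]  # [batch, hidden]
  logits = tf.layers.dense(last_vector, vocab_size, name="answer_projection")
  return logits  # [batch, vocab_size]; argmax gives the predicted answer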

I couldn't find the hyperparameters used in the experiments; are you going to release them somewhere?

Thanks a lot for your great work; looking forward to hearing from you.

Best

Andrea

License?

Thank you for this repository. May we know the license for this work? Is it the MIT license?

AttributeError: 'HParams' object has no attribute 'modality'

I'm just trying to run the example in the README, but it fails with the error below. How can we fix that? Thank you.

t2t-trainer \
  --t2t_usr_dir=~/bAbI-T2T/t2t_usr_dir \
  --tmp_dir=~/babi_data/tmp \
  --data_dir=~/babi_data/data \
  --output_dir=~/babi_data/output \
  --problem=babi_qa_sentence_task1_10k \
  --model=babi_transformer \
  --hparams_set=transformer_tiny \
  --train_steps=100000
AttributeError: 'HParams' object has no attribute 'modality' 
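
For what it's worth, this kind of AttributeError usually points to a version mismatch between the repo and the installed tensor2tensor: a single modality attribute on the problem hparams only exists in some T2T releases, while older ones used separate input_modality/target_modality attributes. A purely illustrative guard, not a confirmed fix:

def get_modality(hparams):
  """Illustrative sketch: read the modality mapping across T2T versions."""
  modality = getattr(hparams, "modality", None)
  if modality is None:
    # Fall back to the older per-feature attributes.
    modality = {
        "inputs": getattr(hparams, "input_modality", None),
        "targets": getattr(hparams, "target_modality", None),
    }
  return modality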

Unable to generate data

Using command:

t2t-datagen --problem=babi_qa_sentence_task15_10k --t2t_usr_dir=t2t_usr_dir --data_dir=t2t_data --tmp_dir=t2t_data/tmp

Ends up with:

Traceback (most recent call last):
  File "/home/ubuntu/.local/bin/t2t-datagen", line 27, in <module>
    tf.app.run()
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/ubuntu/.local/bin/t2t-datagen", line 23, in main
    t2t_datagen.main(argv)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/bin/t2t_datagen.py", line 182, in main
    generate_data_for_registered_problem(problem)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/bin/t2t_datagen.py", line 232, in generate_data_for_registered_problem
    problem.generate_data(data_dir, tmp_dir, task_id)
  File "/home/ubuntu/bAbI-T2T/t2t_usr_dir/babi_qa.py", line 202, in generate_data
    encoder = self.get_or_create_vocab(data_dir, tmp_dir)
AttributeError: 'BabiQaSentenceTask15_10k' object has no attribute 'get_or_create_vocab'
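
get_or_create_vocab is defined on tensor2tensor's Text2TextProblem, while this repo's problem classes extend the base Problem, so this also looks like an API/version mismatch. A hedged sketch of a minimal stand-in (the vocab_filename attribute is an assumption; bAbI's small vocabulary makes a token-level encoder plausible):

import os

from tensor2tensor.data_generators import text_encoder


def get_or_create_vocab(self, data_dir, tmp_dir, force_get=False):
  """Illustrative stand-in: load a one-token-per-line vocab file."""
  vocab_path = os.path.join(data_dir, self.vocab_filename)
  return text_encoder.TokenTextEncoder(vocab_path)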

Unable to decode

Using command:

t2t-decoder \
  --t2t_usr_dir=~/bAbI-T2T/t2t_usr_dir \
  --data_dir=~/babi_data/data \
  --output_dir=~/babi_data/outpu \
  --problem=babi_qa_sentence_task1_10k \
  --model=babi_transformer \
  --hparams_set=universal_transformer_tiny

Ends up with:

INFO:tensorflow:Importing user module t2t_usr_dir from path /home/ubuntu/bAbI-T2T
WARNING:tensorflow:From /home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/trainer_lib.py:165: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=1
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0']
INFO:tensorflow:Using config: {'_num_ps_replicas': 0, '_task_id': 0, 'use_tpu': False, '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1.0
}
, '_task_type': None, '_tf_random_seed': None, 't2t_device_info': {'num_async_replicas': 1}, '_keep_checkpoint_max': 20, '_model_dir': '/home/ubuntu/babi_data/outpu', '_evaluation_master': '', '_master': '', '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fa978becf60>, '_device_fn': None, '_num_worker_replicas': 0, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7fa985a760b8>, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_keep_checkpoint_every_n_hours': 10000, '_save_summary_steps': 100, '_train_distribute': None, '_is_chief': True, '_log_step_count_steps': 100, '_environment': 'local', '_session_config': gpu_options {
  per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
  optimizer_options {
  }
}
}
WARNING:tensorflow:Estimator's model_fn (<function T2TModel.make_estimator_model_fn.<locals>.wrapping_model_fn at 0x7fa985a81378>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Performing local inference from dataset for babi_qa_sentence_task1_10k.
INFO:tensorflow:Reading data files from /home/ubuntu/babi_data/data/babi_qa_en-10k_qa1_single-supporting-fact-dev*
INFO:tensorflow:partition: 0 num_data_files: 1
INFO:tensorflow:Tensor("ExpandDims_1:0", shape=(1, 12, 1), dtype=int64, device=/device:CPU:0)
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Setting T2TModel mode to 'infer'
INFO:tensorflow:Setting hparams.relu_dropout to 0.0
INFO:tensorflow:Setting hparams.attention_dropout to 0.0
INFO:tensorflow:Setting hparams.symbol_dropout to 0.0
INFO:tensorflow:Setting hparams.layer_prepostprocess_dropout to 0.0
INFO:tensorflow:Setting hparams.dropout to 0.0
Traceback (most recent call last):
  File "/home/ubuntu/.local/bin/t2t-decoder", line 16, in <module>
    tf.app.run()
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/ubuntu/.local/bin/t2t-decoder", line 12, in main
    t2t_decoder.main(argv)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/bin/t2t_decoder.py", line 190, in main
    decode(estimator, hp, decode_hp)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/bin/t2t_decoder.py", line 103, in decode
    dataset_split="test" if FLAGS.eval_use_test_set else None)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/decoding.py", line 184, in decode_from_dataset
    for num_predictions, prediction in enumerate(predictions):
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/estimator/estimator.py", line 533, in predict
    features, None, model_fn_lib.ModeKeys.PREDICT, self.config)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/estimator/estimator.py", line 1107, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/t2t_model.py", line 1155, in wrapping_model_fn
    use_tpu=use_tpu)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/t2t_model.py", line 1200, in estimator_model_fn
    return model.estimator_spec_predict(features, use_tpu=use_tpu)
TypeError: estimator_spec_predict() got an unexpected keyword argument 'use_tpu'

Adding use_tpu to https://github.com/MostafaDehghani/bAbI-T2T/blob/master/t2t_usr_dir/babi_transformer.py#L37, like so:

def estimator_spec_predict(self, features, use_tpu=False):

generates another error:

INFO:tensorflow:Importing user module t2t_usr_dir from path /home/ubuntu/bAbI-T2T
WARNING:tensorflow:From /home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/trainer_lib.py:165: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=1
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0']
INFO:tensorflow:Using config: {'_save_summary_steps': 100, '_evaluation_master': '', '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f85d36ec8d0>, '_task_type': None, '_num_worker_replicas': 0, '_is_chief': True, '_environment': 'local', '_session_config': gpu_options {
  per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
  optimizer_options {
  }
}
, '_tf_random_seed': None, '_num_ps_replicas': 0, '_keep_checkpoint_max': 20, '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1.0
}
, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7f85eb9222e8>, '_keep_checkpoint_every_n_hours': 10000, '_save_checkpoints_secs': None, '_device_fn': None, '_log_step_count_steps': 100, '_master': '', '_model_dir': '/home/ubuntu/babi_data/outpu', 't2t_device_info': {'num_async_replicas': 1}, '_save_checkpoints_steps': 1000, '_task_id': 0, 'use_tpu': False, '_train_distribute': None}
WARNING:tensorflow:Estimator's model_fn (<function T2TModel.make_estimator_model_fn.<locals>.wrapping_model_fn at 0x7f85dc766840>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Performing local inference from dataset for babi_qa_sentence_task1_10k.
INFO:tensorflow:Reading data files from /home/ubuntu/babi_data/data/babi_qa_en-10k_qa1_single-supporting-fact-dev*
INFO:tensorflow:partition: 0 num_data_files: 1
INFO:tensorflow:Tensor("ExpandDims_1:0", shape=(1, 12, 1), dtype=int64, device=/device:CPU:0)
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Setting T2TModel mode to 'infer'
INFO:tensorflow:Setting hparams.symbol_dropout to 0.0
INFO:tensorflow:Setting hparams.dropout to 0.0
INFO:tensorflow:Setting hparams.attention_dropout to 0.0
INFO:tensorflow:Setting hparams.layer_prepostprocess_dropout to 0.0
INFO:tensorflow:Setting hparams.relu_dropout to 0.0
INFO:tensorflow:Greedy Decoding
Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1589, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension must be 5 but is 4 for 'babi_transformer/body/parallel_0/body/encoder/layer_0/self_attention/multihead_attention/split_heads/transpose' (op: 'Transpose') with input shapes: [?,71,12,4,32], [4].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/.local/bin/t2t-decoder", line 16, in <module>
    tf.app.run()
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/ubuntu/.local/bin/t2t-decoder", line 12, in main
    t2t_decoder.main(argv)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/bin/t2t_decoder.py", line 190, in main
    decode(estimator, hp, decode_hp)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/bin/t2t_decoder.py", line 103, in decode
    dataset_split="test" if FLAGS.eval_use_test_set else None)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/decoding.py", line 184, in decode_from_dataset
    for num_predictions, prediction in enumerate(predictions):
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/estimator/estimator.py", line 533, in predict
    features, None, model_fn_lib.ModeKeys.PREDICT, self.config)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/estimator/estimator.py", line 1107, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/t2t_model.py", line 1155, in wrapping_model_fn
    use_tpu=use_tpu)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/t2t_model.py", line 1200, in estimator_model_fn
    return model.estimator_spec_predict(features, use_tpu=use_tpu)
  File "/home/ubuntu/bAbI-T2T/t2t_usr_dir/babi_transformer.py", line 43, in estimator_spec_predict
    alpha=decode_hparams.alpha, decode_length=decode_hparams.extra_length)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/t2t_model.py", line 593, in infer
    results = self._greedy_infer(features, decode_length, use_tpu)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/models/transformer.py", line 226, in _greedy_infer
    self._fast_decode(features, decode_length))
  File "/home/ubuntu/bAbI-T2T/t2t_usr_dir/babi_transformer.py", line 425, in _fast_decode
    features["target_space_id"], hparams)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/expert_utils.py", line 231, in __call__
    outputs.append(fns[i](*my_args[i], **my_kwargs[i]))
  File "/home/ubuntu/bAbI-T2T/t2t_usr_dir/babi_transformer.py", line 600, in encode
    save_weights_to=self.attention_weights)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/models/transformer.py", line 1220, in transformer_encoder
    vars_3d=hparams.get("attention_variables_3d"))
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/layers/common_attention.py", line 2926, in multihead_attention
    q = split_heads(q, num_heads)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/utils/expert_utils.py", line 58, in decorated
    return f(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensor2tensor/layers/common_attention.py", line 1109, in split_heads
    return tf.transpose(split_last_dimension(x, num_heads), [0, 2, 1, 3])
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1408, in transpose
    ret = transpose_fn(a, perm, name=name)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 8636, in transpose
    "Transpose", x=x, perm=perm, name=name)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
    op_def=op_def)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1756, in __init__
    control_input_ops)
  File "/home/ubuntu/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1592, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimension must be 5 but is 4 for 'babi_transformer/body/parallel_0/body/encoder/layer_0/self_attention/multihead_attention/split_heads/transpose' (op: 'Transpose') with input shapes: [?,71,12,4,32], [4].
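
The shapes in this error suggest the bAbI "sentence" problem is feeding a story tensor with an extra sentence axis ([batch, num_sentences, sentence_length, ...], here 71 sentences of 12 tokens) into a decode path that expects the usual [batch, length, ...] layout. Purely as an illustration of that reading, not the repo's actual fix, merging the story axes would look like:

import tensorflow as tf  # TF 1.x


def flatten_story(inputs):
  """Merge the sentence and token axes of a bAbI story tensor.

  [batch, num_sentences, sentence_len, 1]
    -> [batch, num_sentences * sentence_len, 1]
  """
  shape = tf.shape(inputs)
  return tf.reshape(inputs, [shape[0], shape[1] * shape[2], 1])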

Configuration:

tensor2tensor (1.6.6)
tensorflow-gpu (1.9.0)
bAbI-T2T repo up to date
