
learning_to_execute's People

Contributors

ajaytalati, nicolas-ivanov, wojciechz, wojzaremba


learning_to_execute's Issues

Nonsense predictions after training

Hello, everybody,

I'm working on reproducing some of the paper results before starting to play with the code/parameters. However, when running main.lua with default parameters I'm still getting nonsense predictions even after training is finished. Please see the example below.

Am I missing something?

Best regards,
Waldir

$ th main.lua -gpuidx 1 -target_length 6 -target_nesting 3

After a few hours:

Some exemplary predictions for the Test dataset
Input: f=816435;
c=7325;
print((c-(f-(765000+576123))))
Target: 532013.
Prediction: (tt +..
-----------------------------
Input: a=(835459*4)
for x in range(14):a-=518673
f=739545;
print((f-a))
Target: 4659131.
Prediction: njef;117
-----------------------------
Input: d=741596;
print(((d+958900)*10))
Target: 17004960.
Prediction: +1.stfsn+
-----------------------------

epoch=136, Training acc.=3.28%, Validation acc.=3.72%, Test acc.=2.99%, current length=6, current nesting=3, characters per sec.=14168, learning rate=0.001
Training is over.

Memorization task input and target

Hi,

I'm trying to reproduce the results from the paper for the sequence memorization task. May I ask how you constructed the input and target sequences for the doubling and inverting strategies?

Say, for example, the task is to memorize/reconstruct the sequence abcd. I've tried to reproduce your results for the different curriculum strategies using the input x and target y:

x = <sos>abcd<eos><sos>dcba<eos><sos>abcd<eos>000000

y = 000000000000000000<sos>abcd<eos>

So I've added start- and end-of-sequence tokens, <sos> and <eos>, and I present the sequence to be memorized, then its inversion, and then its double, followed by padding zeros in the input x. Thus the target y in this example has 18 zeros (3 × 6) before the sequence to be memorized is presented.
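A minimal Python sketch of the construction I described above (the padding symbol "0", the per-copy <sos>/<eos> framing, and the helper names are my own assumptions, not code from the repo):

```python
SOS, EOS, PAD = "<sos>", "<eos>", "0"

def build_memorization_pair(seq):
    """Build (x, y) as described above: present the sequence, its
    inversion, and the sequence again (the double), then pad the
    input; the target is all padding until the final copy."""
    copies = [seq, seq[::-1], seq]                  # original, inverted, doubled
    framed = [[SOS] + list(c) + [EOS] for c in copies]
    block = len(framed[0])                          # len(seq) + 2 per framed copy
    x = [tok for f in framed for tok in f] + [PAD] * block
    y = [PAD] * (block * 3) + [SOS] + list(seq) + [EOS]
    return x, y

x, y = build_memorization_pair("abcd")
# x: <sos>abcd<eos> <sos>dcba<eos> <sos>abcd<eos> then six "0" pads
# y: eighteen "0" pads, then <sos>abcd<eos>
```

For abcd this gives inputs and targets of length 24 each, with the 18 target zeros matching the 3 × 6 framed copies in the input.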

May I ask if this is how you constructed the data for the memorization task in your paper?

Kind regards,

Aj

PS - I also used an nn.ClassNLLCriterion after the mask, so maybe this has interfered with reproducing the results in the paper?

Code fails when batch_size = 1?

When batch_size = 1, the backward pass fails:

/home/sviyer/torch/install/bin/luajit: bad argument #2 to '?' (out of range)
stack traceback:
[C]: at 0x7f032f629700
[C]: in function '__index'
...e/sviyer/torch/install/share/lua/5.1/nngraph/gmodule.lua:324: in function 'neteval'
...e/sviyer/torch/install/share/lua/5.1/nngraph/gmodule.lua:348: in function 'updateGradInput'
/home/sviyer/torch/install/share/lua/5.1/nn/Module.lua:31: in function 'backward'
main.lua:133: in function 'bp'
main.lua:150: in function 'opfunc'
/home/sviyer/torch/install/share/lua/5.1/optim/sgd.lua:43: in function 'sgd'
main.lua:268: in function 'main'
main.lua:312: in main chunk
[C]: in function 'dofile'
...iyer/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
[C]: at 0x00406670

Is this a bug in the updateGradInput of MaskedLoss?

The MaskedLoss node has two children, so shouldn't it return a table of two tensors as its gradInput, rather than the single tensor it currently returns? It seems to incorrectly use the second element of the mini_batch right now, which is why it only works for batch_size > 1.
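To illustrate the contract being discussed (this is not the repo's MaskedLoss, just a NumPy sketch of a two-input masked loss): a node with two children should hand back one gradient per input, i.e. a table of two, and nothing in the logic should require a second batch element to exist:

```python
import numpy as np

def masked_nll_forward(log_probs, targets, mask):
    """log_probs: (batch, vocab); targets: (batch,); mask: (batch,).
    Positions with mask == 0 contribute nothing to the loss."""
    picked = log_probs[np.arange(len(targets)), targets]
    return -(picked * mask).sum()

def masked_nll_backward(log_probs, targets, mask):
    """A module with two tensor inputs must return one gradient per
    input -- a pair here, mirroring a gradInput table of 2. This
    works unchanged for batch size 1."""
    grad_logp = np.zeros_like(log_probs)
    grad_logp[np.arange(len(targets)), targets] = -mask
    grad_mask = np.zeros_like(mask)  # the mask is treated as non-differentiable
    return grad_logp, grad_mask
```

Returning a single tensor instead of the pair is exactly the kind of shape mismatch that nngraph's backward pass would trip over.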

Ground-truth output is fed into RNN when showing predictions

In the show_predictions() function of main.lua [https://github.com/wojciechz/learning_to_execute/blob/master/main.lua#L186], the actual ground truth is fed into the RNN:

local tmp = model.rnns[1]:forward({state.data.x[state.pos],
                                          state.data.y[state.pos + 1],
                                          model.s[0]})[2]

While making predictions, should you use the argmax(pred_vector) from the previous iteration instead of state.data.x[state.pos], to make sure the algorithm cannot look at the ground truth before outputting the EOS symbol and finishing the generation of the output sequence? Otherwise the algorithm is only challenged to predict one character at a time, while the purpose is to produce the whole sequence. I guess ideally one would want to do a beam search.
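A sketch of the free-running alternative I mean (the single-step `step(token, state) -> (logits, state)` interface is hypothetical, not the repo's API):

```python
import numpy as np

def greedy_decode(step, state, sos_idx, eos_idx, max_len):
    """Free-running greedy decoding: feed the argmax of the previous
    prediction back in, instead of the ground-truth symbol."""
    token, out = sos_idx, []
    for _ in range(max_len):
        logits, state = step(token, state)
        token = int(np.argmax(logits))   # previous prediction, not ground truth
        if token == eos_idx:             # stop once the model emits EOS itself
            break
        out.append(token)
    return out
```

Beam search would keep the k best partial sequences at each step instead of only the single argmax.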

I apologize if I misunderstood your intent.

Confused about to_data(code, var, output)

Hi Wojciech,

I wonder if you can help me?

I've been working through the code for a few days, but I'm still confused about line 42

table.insert(x, symbolsManager:get_symbol_idx(output:byte(j)))

in data.lua. It seems as though this line gives the network the 'answer/output' to the program, which is the input on line 39? I thought maybe it should be:

for j = 1, #output do
  table.insert(x, 0)
  table.insert(y, symbolsManager:get_symbol_idx(output:byte(j)))
end

I'm still running experiments to see if the above change makes sense, that is, whether the code still learns. I could not find any part of the paper that explains line 42; maybe I overlooked something or did not understand it?

If it is correct, is there any chance you could explain a little of what line 42 does?
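For concreteness, here is a hypothetical sketch (not the repo's data.lua; the helper names are mine) contrasting the two constructions: line 42's behaviour, where x carries the ground-truth output symbols during the output phase, versus the proposed change, where x carries zeros there:

```python
def to_data_sketch(program, output, get_idx, teacher_forcing=True):
    """Build input x and target y for one (program, output) pair.
    teacher_forcing=True mirrors line 42: x repeats the ground-truth
    output symbols. teacher_forcing=False is the proposed change:
    x carries padding zeros during the output phase."""
    x = [get_idx(ch) for ch in program]
    y = [0] * len(program)                  # no target while reading the program
    for ch in output:
        x.append(get_idx(ch) if teacher_forcing else 0)
        y.append(get_idx(ch))               # target is the output either way
    return x, y
```

With teacher forcing the network predicts each output character while seeing the previous ground-truth character, which is a common training setup even when decoding at test time is free-running.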

Kind regards,

Aj

Back Propagation

In the article you said: "Our LSTM has two layers and is unrolled for 50 steps in both experiments. It has 400 cells per layer and its parameters are initialized uniformly in [−0.08, 0.08]." I don't understand how you back-propagate and calculate the cross-entropy losses over the 50 steps while the input continues and there is no output yet. Could you help me?
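My current understanding, as a sketch under my own assumptions (not the paper's code): the loss is masked so that only steps with a target contribute to the cross-entropy; input-only steps add zero loss and receive gradient only from later steps via backpropagation through time:

```python
import numpy as np

def unrolled_masked_loss(log_probs_seq, targets_seq, mask_seq):
    """Over an unroll of T steps, accumulate cross-entropy only at
    steps that have a target (mask == 1). Steps that are still
    consuming input (mask == 0) contribute zero loss."""
    total = 0.0
    for log_probs, target, m in zip(log_probs_seq, targets_seq, mask_seq):
        if m:                       # a target exists at this step
            total -= log_probs[target]
    return total
```

Is this roughly what happens, with the gradient for the input-consuming steps arriving purely through the recurrent state?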

Best test accuracy for hardness (1,1)

Hi,

I was just wondering what the best test accuracy I could expect from the code is for hardness (1,1), and for how many epochs I would have to train to achieve it?

I don't seem to be able to get above ~80% with the default settings.

Thanks for your help 👍

Regards,

Aj

Does anybody have a problem when executing on GPU?

It works well on CPU; however, it doesn't on GPU.

When I call 'th main.lua' (i.e. I use the GPU, since the default value of gpuidx is 1), it says:

/home/yjy765/torch/install/bin/luajit: /home/yjy765/torch/install/share/lua/5.1/nn/Linear.lua:55: invalid arguments: CudaTensor number CudaTensor number DoubleTensor CudaTensor
expected arguments: CudaTensor~2D [CudaTensor2D] [float] CudaTensor2D CudaTensor2D | _CudaTensor2D_ float [CudaTensor2D] float CudaTensor2D CudaTensor~2D
stack traceback:
[C]: in function 'addmm'
/home/yjy765/torch/install/share/lua/5.1/nn/Linear.lua:55: in function 'func'
...e/yjy765/torch/install/share/lua/5.1/nngraph/gmodule.lua:275: in function 'neteval'
...e/yjy765/torch/install/share/lua/5.1/nngraph/gmodule.lua:310: in function 'forward'
main.lua:127: in function 'fp'
main.lua:166: in function 'opfunc'
/home/yjy765/torch/install/share/lua/5.1/optim/sgd.lua:44: in function 'sgd'
main.lua:292: in function 'main'
main.lua:336: in main chunk
[C]: in function 'dofile'
...y765/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670

I tried to fix it but failed.

I wish somebody could explain to me how to fix it.
