
cortexsys's People

Contributors

jfsantos, joncox123


cortexsys's Issues

Disable GPU support?

I don't have a dedicated graphics card and want to disable GPU support. Per the docs, I pass false for useGPU in % definitions(PRECISION, useGPU, whichThreads, plotOn):

defs = definitions(PRECISION, false, [], true);

but the useGPU flag in defs is still 1.
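One possible workaround, sketched below (untested; it assumes defs is an ordinary struct whose useGPU field the rest of the toolbox consults at runtime, as the question suggests), is to overwrite the flag after definitions() returns:

```matlab
% Workaround sketch: force GPU support off after calling definitions().
% Assumes defs.useGPU is a plain struct field read by the toolbox,
% as implied by the question above.
PRECISION = 'double';
defs = definitions(PRECISION, false, [], true);
defs.useGPU = false;   % override in case the useGPU argument was ignored
```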

Input of LSTM networks?

I want to use an LSTM neural network for image classification. Does the LSTM input have to be a cell array?

LSTM query?

I want to use an LSTM for MNIST classification. I have created two cell arrays, X and y, for the input data and the labels, both 5000x1. Each cell of X is 400x1, i.e. a flattened 20x20 image. Is this the right approach? The problem is that I have to set T = 5000, and it is taking too long to train the network.
Any suggestions?

Error when training a mapping from a 100-neuron input to a 4-neuron output

The network defined as follows:
input size = 100, output size = 4

layers.af{1} = [];
layers.sz{1} = [input_size 1 1];
layers.typ{1} = defs.TYPES.INPUT;

layers.af{end+1} = ReLU(defs, []);
layers.sz{end+1} = [input_size 1 1];
layers.typ{end+1} = defs.TYPES.FULLY_CONNECTED;

layers.af{end+1} = ReLU(defs, []);
layers.sz{end+1} = [output_size 1 1];
layers.typ{end+1} = defs.TYPES.FULLY_CONNECTED;

if defs.plotOn
    nnShow(23, layers, defs);
end

Error using  -
Matrix dimensions must agree.

Error in squaredErrorCostFun (line 2)
    J = (Y.v(:,:,t)-A.v(:,:,t)).^2;

Error in ReLU/cost (line 65)
            J = squaredErrorCostFun(Y, A, m, t);

Error in nnCostFunctionCNN (line 29)
J = nn.l.af{nn.N_l}.cost(Y, nn.A{nn.N_l}, m, 1) + J_s;

Error in
Train_proposal>@(nn,r,newRandGen)nnCostFunctionCNN(nn,r,newRandGen)

Error in gradientDescentAdaDelta (line 69)
    [J, dJdW, dJdB] = feval(f, nn, r, true);

Error in Train_proposal (line 158)
nn = gradientDescentAdaDelta(costFunc, nn, defs, [], [], [], [], 'Training Entire Network');
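The "Matrix dimensions must agree" failure in squaredErrorCostFun means Y.v(:,:,t) and A.v(:,:,t) have different sizes. With the layers above, each time slice of the network output is output_size-by-m, so the target matrix must match. A small sanity check, sketched under the assumption that Y holds the raw target matrix before it is wrapped for training:

```matlab
% Sanity-check sketch: the cost computes (Y.v(:,:,t) - A.v(:,:,t)).^2,
% so targets and output activations must agree element-wise in size.
% output_size is taken from this issue's layer definition.
output_size = 4;
m = size(Y, 2);                     % one column per training example
assert(size(Y, 1) == output_size, ...
    'Targets need one row per output neuron (here 4).');
```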

Empty function: gather.m

What is the content of the function gather.m? I have been trying to run the example MNIST_Deep_Classifier, but it produces the following error message:

**Undefined function or variable 'endfunction'.

Error in gather (line 3)
endfunction**

When I checked the function, it is empty; the only text after the function definition is endfunction.

What is the purpose of the function gather.m?

Thanks in advance.

Stacking of layers

Thanks for the great toolbox!

I would like to build an encoder-LSTM-decoder network using your toolbox.

What would be the best practice for training such a network? As I understand it, the input format for the encoder and decoder differs from that of the LSTM, which would make it impossible to train the network in one go, correct?

Is it therefore possible to train the encoder and decoder networks separately, use the encoder output as input to the LSTM network for training, and stack the layers afterwards?

MMX Compile Error - MinGW-w64 v. 1.8 - MATLAB 2017a

Hi all,
I had a problem compiling the mmx files with build_mmx.m Script.

Error Code:
...Cortexsys\nn_core\mmx\mmx.cpp:79:16: error: expected initializer before 'teval'
DWORD _stdcall teval(void* pn)

Fix:
In file mmx.cpp, line 79, change the single underscore in _stdcall to a double underscore:

DWORD __stdcall teval(void* pn)

Hope this helps :)

LSTM different input and output lengths

I want to create a network with an LSTM layer, where the number of outputs from the LSTM layer is different from the number of its inputs. Is this possible?

Error while training LSTM

I've been frequently getting this error while training the net. Can someone tell me what the problem might be?

Index exceeds matrix dimensions.

Error in varObj>@(C)full(C(:,r)) (line 45)
v = cellfun(@(C) full(C(:,r)), obj.v, 'UniformOutput', false);

Error in varObj/getmb (line 45)
v = cellfun(@(C) full(C(:,r)), obj.v, 'UniformOutput', false);

Error in nnCostFunctionLSTM (line 13)
Y = varObj(nn.Y.getmb(r), nn.defs, nn.defs.TYPES.OUTPUT);

Error in testSumNumbersGenerator>@(nn,r,newRandGen)nnCostFunctionLSTM(nn,r,newRandGen)

Error in gradientDescentAdaDelta (line 69)
[J, dJdW, dJdB] = feval(f, nn, r, true);

Error in testSumNumbersGenerator (line 137)
nn = gradientDescentAdaDelta(costFunc, nn, defs, [], [], [], [], 'Training Entire Network');

Problem with convolutional layers when using 1-dimensional data

Hello,
I have been trying to create a convolutional autoencoder for use with 1D data. I can define the network, and I can run data through the untrained network, but attempting to train the network produces the following error:

Output argument "gp" (and maybe others) not assigned during call to "LinU/ograd".

Error in nnCostFunctionCNN (line 53)
d{k} = (nn.A{nn.N_l}.v - Y.v).*nn.l.af{nn.N_l}.ograd(nn.A{nn.N_l}.v);

Error in @(nn,r,newRandGen)nnCostFunctionCNN(nn,r,newRandGen)

Error in gradientDescentAdaDelta (line 69)
[J, dJdW, dJdB] = feval(f, nn, r, true);

Any suggestions for a workaround would be appreciated.

Update a trained auto-encoder model with new data

Hello Jonathan

I am Ali, a PhD student at the University of Tokyo.

Thank you so much for your great code in the Cortexsys project. I want to use your code in my research, and I have a question:

I want to update a trained autoencoder model with new data. How can I do that in your code?

Can I do that by setting the initial weights and biases to the weights and biases from my trained model?

I really appreciate your help.

Regards,
Ali

Slow LSTM testing

So, I'm following the provided example to train an LSTM and I am testing it as shown in the example. I have a variable X2 with the input data and T2 with the target values. Minimal example:

   nn.disableCuda();
   Ytest = zeros(size(T2));
   nn.A{1} = varObj(X2, defs);
   preallocateMemory(nn, 1, size(X2,3)+2);
   for t = 2:size(Ytest,2)
       feedforwardLSTM(nn, 1, t, false, true);
       Ytest(:,t) = nn.A{end}.v(:,1,t);
   end

The thing is, this takes as much time as, or even more than, training with an equivalent amount of data. My question is: since I have all the targets and all the input data at the beginning (which is different from the example), is there a way to obtain the output all at once instead of looping through every time step?
Thanks!

Cortexsys-master/octave_wrappers/gather.m seems broken and incomplete

The function mentioned in the title seems broken and incomplete. Running Recurrent_Shakespeare_Generator.m (which is the LSTM/RNN example) results in an error because of it. Any ideas on how to fix gather.m? Thanks in advance!

The entirety of gather.m:

function x = gather (x)

endfunction
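endfunction is Octave-only syntax, which is why MATLAB reports it as an undefined function or variable. Since the wrapper appears intended as a CPU no-op, one fix (a sketch, not tested against the toolbox) is to rewrite the stub with syntax that parses in both environments:

```matlab
% octave_wrappers/gather.m, rewritten so it parses in both MATLAB and
% Octave: return the input unchanged, acting as a CPU no-op stand-in
% for MATLAB's gpuArray gather.
function x = gather(x)
end
```

Note that if this wrapper shadows MATLAB's built-in gather, an alternative is to drop octave_wrappers from the MATLAB path so the real gather handles gpuArray data.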

Providing cross-validation sets for LSTM

Is there a way to provide cross-validation/test sets for the LSTM training function? I can see that the function is called as follows:

gradientDescentAdaDelta(costFunc, nn, defs, Xts, Yts, yts, y, 'Training Entire Network');

My question is: what are the Xts, Yts, yts, and y variables?

Thanks a lot.

Learning Rate for Gradientdescent

Hi Jon,
When I use the normal gradient descent function and set alpha to zero (your comment on that line implies this will give a variable learning rate), the learning rate actually stays at 0 the entire time, and my error on MNIST stays at ~90%. If I set alpha to a nonzero value, it is used as the first value and then decayed according to the equation in alphatau. So it works, but the comment is misleading: only a variable learning rate seems to be available; a constant learning rate appears not to be implemented.

Anyways, great code :)
