
nn's Issues

;

Looks like someone put stray ;s in the docs. Lots of them.

Change in model:forward output dimension while batch size is 1

Hi guys,

Previously, model:forward returned an output of dimension 1 x num-of-classes when the batch size was 1. After the recent updates, with batch size 1 I only get an output of dimension num-of-classes. I just want to make sure whether this is intended or an unwelcome change.

Code to reproduce the above behaviour: https://github.com/rudrapoudel/hello_ml/blob/master/dnn/dnn.lua (the outputs variable at line 161; num-of-classes is 6 here). You can run it like this:

th dnn.lua -batch_size 1
th dnn.lua -batch_size 2

Module:findModules() should support inheritance

The current implementation of Module:findModules(typename) will not find instances of classes that inherit from typename. If supported, one could easily enumerate all modules of a model by model:findModules('nn.Module'). Moreover, adapters of existing modules would still be found easily.

The proposed change is easy to do by replacing

local mod_type = torch.typename(self)
if mod_type == typename then

with

if torch.isTypeOf(self, typename) then
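
For illustration, a small check of what the proposed change enables (the model below is just a made-up example):

require 'nn'

-- a made-up model, just to exercise findModules
local model = nn.Sequential()
model:add(nn.SpatialConvolutionMM(3, 16, 5, 5))
model:add(nn.ReLU())
model:add(nn.SpatialMaxPooling(2, 2, 2, 2))

-- exact-type lookup, as it works today
local relus = model:findModules('nn.ReLU')
print(#relus)    -- 1

-- with the isTypeOf-based matching proposed above, this would return
-- every module (the container and its three children), since they all
-- inherit from nn.Module
local all = model:findModules('nn.Module')
print(#all)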

ClassNLLCriterion - error in forward

When dealing with the CUDA criterion I get an error in ClassNLLCriterion while executing this line:
input.nn.ClassNLLCriterion_updateOutput(self, input, target)

The error is -
/usr/local/bin/luajit: /usr/local/share/lua/5.1/nn/ClassNLLCriterion.lua:15: attempt to call field 'ClassNLLCriterion_updateOutput' (a nil value)

Removing a layer to specialize an autoencoder

I am trying to specialize an autoencoder. Is there a way to delete layers from the encoder to then attach the specialization layers in their place? Or do I have to copy the weights out of the encoder into a new model and then add layers onto the new model?

Does anyone have an example of doing this?

Thanks,

David
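
A rough sketch of the copy-the-layers approach, assuming the trained autoencoder is an nn.Sequential (all layer sizes below are made up):

require 'nn'

-- hypothetical pretrained autoencoder: 2 encoder layers + 2 decoder layers
local auto = nn.Sequential()
auto:add(nn.Linear(100, 50))
auto:add(nn.Linear(50, 10))
auto:add(nn.Linear(10, 50))
auto:add(nn.Linear(50, 100))

-- reuse the encoder layers; :get(i) returns the module instances themselves,
-- so the pretrained weights come along (use :clone() instead for a copy)
local specialized = nn.Sequential()
for i = 1, 2 do
   specialized:add(auto:get(i))
end

-- attach the new specialization layers in place of the decoder
specialized:add(nn.Linear(10, 5))
specialized:add(nn.LogSoftMax())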

LookupTable.lua:119 variable k is always nil

j does not exist; i is the iterator variable.

@@ -115,9 +115,9 @@
 function LookupTable:accUpdateGradParameters(input, gradOutput, lr)
    if input:dim() == 1 then
       for i=1,input:size(1) do
-         local k = input[j]
+         local k = input[i]
          local kscale = self:scaleUpdateByKey(k)
-         self.weight:select(1, input[i]):add(-lr*kscale, gradOutput:select(1, i))
+         self.weight:select(1, k):add(-lr*kscale, gradOutput:select(1, i))
       end
    elseif input:dim() == 2 then
       for i=1,input:size(1) do

SelectTable cannot resize

I don't have time to fix it today, though I'll have a look at it tomorrow. If the input size to SelectTable changes, then the gradInput is incorrect:

require 'nn'
st = nn.SelectTable(1)
input = {torch.rand(1)}
gradOutput = torch.rand(1)
st:forward(input)
print(st:backward(input, gradOutput))
input = {torch.rand(1), torch.rand(1)}
print(st:backward(input, gradOutput))

This returns:

{
  1 : DoubleTensor - size: 1
}
{
  1 : DoubleTensor - size: 1
}

The second print should have been a table of length 2. Looking at the code, it looks like it doesn't adapt to input size changes... However, this should be easy enough to fix (I just don't have time tonight).
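
A rough sketch of the kind of fix meant here, rebuilding gradInput from the current input rather than reusing a fixed-size cache (this is not the actual patch, and index handling is simplified):

-- simplified sketch; assumes self.index is a positive index, as in nn.SelectTable
function SelectTable:updateGradInput(input, gradOutput)
   self.gradInput = {}
   for i = 1, #input do
      if i == self.index then
         self.gradInput[i] = gradOutput
      else
         -- zero gradient for the entries that were not selected
         self.gradInput[i] = input[i]:clone():zero()
      end
   end
   return self.gradInput
end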

Lack of OpenMP support in ParallelTable

Interestingly, ParallelTable itself is not parallelized across multiple cores at the moment. It seems the implementation would be a bit different from other nn modules, because of the need to execute the Lua code of the sub-modules. I'll be happy to work on it if it's worthwhile.

Spatial Adaptive Max Pooling

@soumith We have implemented both C and CUDA versions of Spatial Adaptive Max Pooling, which is used in the SPP paper.
Basically, for a given output size and any input, it always returns a tensor of that output size.
We can put it into inn, or into nn and cunn; what do you think?
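
For reference, the intended behaviour in a small sketch (the constructor signature below is an assumption, mirroring what such a module would look like in nn):

require 'nn'

-- fixed 4x4 output, whatever the spatial size of the input
local pool = nn.SpatialAdaptiveMaxPooling(4, 4)
print(#pool:forward(torch.rand(3, 17, 23)))   -- 3x4x4
print(#pool:forward(torch.rand(3, 40, 31)))   -- still 3x4x4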

nn.ConcatTable recursive solution

My PR #28 didn't include a recursive solution for nested tables. Work keeps piling up on me, but I will try and get around to fixing this soon. (otherwise I'd be happy if someone else got around to it before me).

ConcatTable() fails when calling the getParameters() function

When using the ConcatTable module, I get a weird error. If I add two complex networks to the table, the getParameters() function always fails. Could you guys help figure out what's wrong?

tmp = nn.Sequential()
tmp:add(nn.SpatialConvolutionMM(3,3,3,3))
tmp:add(nn.SpatialConvolutionMM(3,3,3,3))

model1 = nn.ConcatTable()
model1:add(tmp)
model1:add(nn.Identity())

model2 = nn.ConcatTable()
model2:add(tmp)
model2:add(tmp)

w1,_ = model1:getParameters()
w2,_ = model2:getParameters()

Note that model1 works fine, but model2 does not.
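
For comparison, a guess at a workaround: adding an independent copy of the sub-network instead of the same instance twice (whether the duplicated instance is really what trips up getParameters() is not confirmed here):

model3 = nn.ConcatTable()
model3:add(tmp)
model3:add(tmp:clone())   -- independent deep copy instead of the same instance

w3,_ = model3:getParameters()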

Problem with SpatialAveragePooling

Hi,

While using SpatialAveragePooling I have the following problem:

/usr/local/share/lua/5.1/nn/SpatialAveragePooling.lua:13: attempt to call field 'SpatialAveragePooling_updateOutput' (a nil value)

Why is that?

nn.Power -> cannot accept inputs with 0.

Hi,

nn.Power divides by the input in updateGradInput. If the input contains a 0, the resulting value will be nan. For instance:
f(x) = x^7
f'(0) = 0, not nan.

Best,

Michael.
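
A minimal sketch of a gradient that avoids dividing by the input, computing p * x^(p-1) directly (this is not the current nn code; it assumes the exponent is stored in self.pow, as in nn.Power):

function Power:updateGradInput(input, gradOutput)
   self.gradInput:resizeAs(input)
   -- d/dx x^p = p * x^(p-1); no division by the input, so x = 0 gives 0 (for p > 1)
   self.gradInput:copy(input):pow(self.pow - 1):mul(self.pow)
   self.gradInput:cmul(gradOutput)
   return self.gradInput
end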

ClassNLLCriterion - latest commit has broken batch mode

Hi there,

It seems the last commit to ClassNLLCriterion broke batch-mode support.
I have inputs:
output = torch.CudaTensor of size 64 x 10319
targets = torch.CudaTensor of size 64
err = criterion:forward(output, targets)

I get the following error:
/usr/local/bin/luajit: bad argument #2 to '?' (sizes do not match)
stack traceback:
[C]: at 0x7fac8dcbc940
[C]: in function '__newindex'
/usr/local/share/lua/5.1/nn/ClassNLLCriterion.lua:16: in function 'forward'
../train.lua:156: in function 'opfunc'
/usr/local/share/lua/5.1/optim/sgd.lua:43: in function 'optimMethod'
../train.lua:185: in function 'train'
doall.lua:51: in main chunk
[C]: in function 'dofile'
/usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:129: in main chunk
[C]: at 0x00405e60

Problem compiling without openmp support

The commit "speedup and optimizations for SparseLinear" (a38407a) includes a call to omp_get_thread_num() which is an undefined symbol when not using openmp.

This problem manifests as confusing error messages down the line. A fresh install with clang on osx will fail to load nn with these error messages:

th> require 'nn'
/Users/andy/local/pkg/torch7/share/lua/5.1/trepl/init.lua:318: loop or previous error loading module 'nn'
stack traceback:
    [C]: in function 'error'
    /Users/andy/local/pkg/torch7/share/lua/5.1/trepl/init.lua:318: in function 'require'
    [string "require 'nn'"]:1: in main chunk
    [C]: in function 'xpcall'
    /Users/andy/local/pkg/torch7/share/lua/5.1/trepl/init.lua:587: in function 'repl'
    ...y/local/pkg/torch7/lib/luarocks/rocks/trepl/scm-1/bin/th:185: in main chunk
    [C]: at 0x0104d4f060
th> require 'libnn'
/Users/andy/local/pkg/torch7/share/lua/5.1/trepl/init.lua:318: .../andy/local/pkg/torch7/share/lua/5.1/luarocks/loader.lua:117: error loading module 'libnn' from file '/Users/andy/local/pkg/torch7/lib/lua/5.1/libnn.so':
    dlopen(/Users/andy/local/pkg/torch7/lib/lua/5.1/libnn.so, 6): Symbol not found: _omp_get_thread_num
  Referenced from: /Users/andy/local/pkg/torch7/lib/lua/5.1/libnn.so
  Expected in: flat namespace
 in /Users/andy/local/pkg/torch7/lib/lua/5.1/libnn.so

Inconsistency with criterion.sizeAverage semantics

In ClassNLLCriterion, if self.sizeAverage is true, then the loss and gradient get divided by the number of examples in the minibatch.

However, in MultiMarginCriterion (both in nn and cunn), if self.sizeAverage is true, the loss and gradient get divided by the number of target labels.

I'm wondering what the convention is supposed to be. Dividing by the size of the minibatch makes a lot more sense to me. Otherwise the learning rate will interact with the minibatch size in weird ways. Also, in MultiMarginCriterion.cu, the nframe variable is declared, but never used, so this makes me think that we should have been dividing by nframe, rather than dim.

This sounds like a bug with MultiMarginCriterion. I can make a PR to fix it, but I want to make sure I'm interpreting things correctly first.

SpatialConvolutionMM cuda/cpu interoperability broken

There are two caches in SpatialConvolutionMM: fgradInput and finput. The problem is that they are used in a different way in the 'nn' and 'cunn' C-implementations. Specifically, the cunn-implementation expects to find 1s in fgradInput while the nn-implementation stores arbitrary data in it. Thus, converting the module from CPU to CUDA makes it produce garbage in a very evil way because nothing crashes. For example

require 'nn'
require 'cunn'
require 'cutorch'

torch.setdefaulttensortype('torch.FloatTensor')

T = torch.Tensor(10,10,10):zero()
M = nn.SpatialConvolutionMM(10,1,5,5,1,1,0)

F = M:forward(T)
M:backward(T,F)

--M.fgradInput = torch.Tensor()   --UNCOMMENT ME TO FIX THIS

M=M:cuda()
C = M(T:cuda())

assert(math.abs( torch.norm(F:cuda()-C)) < 1e-5)

The proposed fix is to clear the caches at type conversion:

function SpatialConvolutionMM:type(type)
   self.finput = torch.Tensor()
   self.fgradInput = torch.Tensor()
   return parent.type(self,type)
end

nn.View behaviour has changed

@Atcold has noted that his old models didn't work anymore when the batch size is equal to 1, the batch dimension being squeezed out.
Since commits b94b457 and 6b0a15f the behaviour of View has changed when the batch size is equal to 1 and the user does not specify the number of input dimensions.

For example:

require 'nn'

m = nn.View(18)
m:forward(torch.rand(1,2,3,3))

outputs a tensor of size 18, no longer treating the first dimension as a batch dimension as was done before those commits. On the other hand, if the user sets

require 'nn'
m = nn.View(18):setNumInputDims(3)
m:forward(torch.rand(1,2,3,3))

the output is a 1x18 tensor, as expected.

I don't think the current behaviour is a problem, but it is indeed not backward compatible for a batch size of 1. Do you think it should be changed to be consistent with the previous View?

MarginRankingCriterion does not change type when cast to CUDA

When MarginRankingCriterion is converted to CUDA, its internal tensors do not change type from Float/DoubleTensor to CudaTensor. This could be quickly fixed by providing a type function:

function MarginRankingCriterion:type(type)
   self.gradInput[1] = self.gradInput[1]:type(type)
   self.gradInput[2] = self.gradInput[2]:type(type)
   return parent.type(self, type)
end

(Sorry, can't fix it myself right now)

type conversion of nn.L1HingeEmbeddingCriterion() to cuda

When L1HingeEmbeddingCriterion is converted to CUDA, its internal tensors do not change type from FloatTensor to CudaTensor.

Example:

th> criterion = nn.L1HingeEmbeddingCriterion()
th> criterion.gradInput
{
  1 : FloatTensor - empty
  2 : FloatTensor - empty
}
th> criterion = criterion:cuda()
th> criterion.gradInput
{
  1 : FloatTensor - empty
  2 : FloatTensor - empty
}

This behavior prevents criterion:backward from working correctly on CudaTensors, because it results in an error:
...h/install/share/lua/5.1/nn/L1HingeEmbeddingCriterion.lua:26: bad argument #1 to 'resizeAs' (torch.FloatTensor expected, got torch.CudaTensor)

I think this has something to do with the way that this criterion is constructed, and that it has in fact two gradInputs.
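
A possible fix, mirroring the type() override suggested for MarginRankingCriterion above (untested sketch):

function L1HingeEmbeddingCriterion:type(type)
   self.gradInput[1] = self.gradInput[1]:type(type)
   self.gradInput[2] = self.gradInput[2]:type(type)
   return parent.type(self, type)
end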

Preserve sharing semantics when typecasting a network [for example to :cuda()]

Currently, if one creates a network (say a siamese network) where some modules share weights with one another, then once the network is cast to another type, such as :cuda() or :float(), the sharing is untied.

Not only is this confusing, it is also undocumented.

It is also fixable pretty simply.

I will prepare a patch for this.

fbcunn -> cunn/nn

I am thinking we should move some stuff from fbcunn to cunn/nn. We could add doc, unit tests, C code, etc. What are your thoughts?

Problem with nn.View since last pull

Hi,

Since the last pull (of nn and torch), I'm unable to use -1 arguments in nn.View in most situations. For instance, the following code fails:

require 'nn'
a = nn.View(-1, 2)
x = torch.randn(4)
a:forward(x)

with the following error

.../mathieu/torch_sc/install/share/lua/5.1/torch/Tensor.lua:455: Wrong size for view. Input size: 4. Output size: -2x-1x2
stack traceback:
[C]: in function 'error'
.../mathieu/torch_sc/install/share/lua/5.1/torch/Tensor.lua:455: in function 'view'
/home/mathieu/torch_sc/install/share/lua/5.1/nn/View.lua:72: in function 'forward'
[string "a:forward(x)"]:1: in main chunk
[C]: in function 'xpcall'
/home/mathieu/torch_sc/install/share/lua/5.1/trepl/init.lua:588: in function 'repl'
...u/torch_sc/install/lib/luarocks/rocks/trepl/scm-1/bin/th:185: in main chunk
[C]: at 0x00405910

x:view(-1, 2) works well, though.

Reduce network size after training

After I train a net, it grows in size, which is especially easy to notice with large batches. Before saving, I would like to reduce it back to its original size so that it only contains the weights, or at least to its non-batch size. How can I do that?

Also, it would be a great idea to have a standard function for preparing a net for saving that would do at least that, plus removal of descriptors.
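
A rough sketch of one way to do this by hand before saving, assuming it is the cached output/gradInput tensors that blow up the file (the fields below are the standard nn ones; buffers specific to particular layers, e.g. finput in SpatialConvolutionMM, would need the same treatment):

-- recursively replace cached activation/gradient buffers with empty tensors
local function lighten(module)
   module.output = torch.Tensor()
   module.gradInput = torch.Tensor()
   if module.modules then         -- containers expose their children here
      for _, child in ipairs(module.modules) do
         lighten(child)
      end
   end
end

lighten(model)                    -- 'model' is your trained network
torch.save('model.t7', model)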

model file too big after batch learning

This issue is probably caused by Module#output and Module#gradInput.

require 'nn'

function batch_learning(model, criterion, x, y)
   local z = model:forward(x)
   local df_do = torch.Tensor(z:size(1), y:size(2)):zero()
   for i = 1, z:size(1) do
      local f = criterion:forward(z[i], y[i])
      df_do[i]:copy(criterion:backward(z[i], y[i]))
   end
   model:backward(x, df_do)
end
function online_learning(model, criterion, x, y)
   for i = 1, x:size(1) do
      local z = model:forward(x[i])
      local f = criterion:forward(z, y[i])
      model:backward(x[i], criterion:backward(z, y[i]))
   end
end
function model_file_too_big()
   local EXAMPLES = 10000 -- 1000000
   local FEATS = 1000
   local x = torch.Tensor(EXAMPLES, FEATS):uniform()
   local y = torch.Tensor(EXAMPLES, 1):uniform()
   local criterion = nn.MSECriterion()

   local model = nn.Sequential()
   model:add(nn.Linear(x:size(2), 1))
   model:add(nn.Reshape(1))
   model:add(nn.Sigmoid())

   online_learning(model, criterion, x, y)
   torch.save("online.model", model)

   batch_learning(model, criterion, x, y)
   torch.save("batch.model", model)
end
model_file_too_big()
% th model_too_big.lua
% ls -laS
total 78424
-rw-rw-r-- 1 nagadomi nagadomi 80257915 Nov 26 01:15 batch.model
-rw-rw-r-- 1 nagadomi nagadomi    25891 Nov 26 01:15 online.model

Problematic output in nn.Reshape

require 'nn'
input = torch.Tensor(2, 8):fill(4)
m = nn.Reshape(16)
print(m:forward(input)) -- Dimension 16, as expected.

input2 = torch.Tensor(1, 16):fill(4)
print(m:forward(input2)) -- Dimension 1x16, unexpectedly

model forward error when input is a randomly generated tensor

When using a randomly generated tensor as input to the network,
it raises this kind of error:

$ th
> require 'nn'
true    
                                                                      [0.0233s]
> model = nn.Sequential()
                                                                      [0.0001s]
> model:add(nn.SpatialConvolutionMM(3,5,3,3))
nn.Sequential {
  [input -> (1) -> output]
  (1): nn.SpatialConvolutionMM
}
                                                                      [0.0003s]
> img_in = torch.rand(3,20,20):type('torch.FloatTensor')
                                                                      [0.0003s]
> fv_out = model:forward(img_in)
/usr/local/share/lua/5.1/nn/Sequential.lua:37: bad argument #1 (field finput is not a torch.FloatTensor)
stack traceback:
    [C]: in function 'updateOutput'
    /usr/local/share/lua/5.1/nn/Sequential.lua:37: in function 'forward'
    [string "fv_out = model:forward(img_in)"]:1: in main chunk
    [C]: in function 'xpcall'
    /usr/local/share/lua/5.1/trepl/init.lua:696: in function 'repl'
    /usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:121: in main chunk
    [C]: at 0x00404480  
                                                                      [0.0002s]

According to Clement Farabet's torch/tutorials, a tensor's type can be converted using dst = src:type('torch.TypeTensor'), where Type can be 'Float', 'Double', 'Byte', 'Int', etc.
However, this doesn't seem to work in this case,
and if I use print(type(img_in)) I get userdata, the same result as for the type of torch.rand(..,..,..).

In addition, there is a way to avoid this error message (using the same model):

> temp = torch.rand(3,20,20)
                                                                      [0.0003s]
> temp = temp:type('torch.FloatTensor')
                                                                      [0.0002s]
> img_in = torch.Tensor(3,20,20):copy(temp)
                                                                      [0.0002s]
> fv_out = model:forward(img_in)

We can see that the initial types of temp and img_in are different:
userdata for the former and torch.FloatTensor for the latter.

In conclusion, the type-conversion line dst = src:type('..') might be limited, or should be revised to stick to its original function.

nn.L1Penalty

self.provideOutput = true should be the default in the constructor (it's safer that way).
It should perhaps also be an argument to the constructor.

Module method for custom parameters initialization

Hi!

Do you think an additional method for custom parameter initialization would be relevant for modules such as nn.Linear or nn.SpatialConvolution?
I noticed that weights and biases are initialized from a uniform(-r, r) distribution by default. However, this is sometimes not the best choice.
A dedicated method, taking a custom initialization function as input, would be cleaner than directly manipulating the module.weight tensor.
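
For illustration, a sketch of what such a helper could look like today, applied by hand (the initModule function below is hypothetical, not part of nn):

require 'nn'

-- hypothetical helper: apply a custom weight initializer to a module
local function initModule(module, f)
   if module.weight then f(module.weight) end
   if module.bias then module.bias:zero() end
   return module
end

-- e.g. a Gaussian initialization scaled by fan-in, instead of the
-- default uniform(-r, r)
local lin = initModule(nn.Linear(100, 10), function(w)
   w:normal(0, 1 / math.sqrt(w:size(2)))
end)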

SpatialFullConvolutionMap does weird things

Below, SpatialConvolutionMap is reported for comparison.

th> SFC = nn.SpatialFullConvolutionMap(nn.tables.oneToOne(3), 5, 5)                                                                    
                                                                      [0.0005s]
th> SC = nn.SpatialConvolutionMap(nn.tables.oneToOne(3), 5, 5)                                                                         
                                                                      [0.0004s]
th> #SFC:forward(torch.Tensor(3,20,20))                                                                                                

  3
 24
 24
[torch.LongStorage of size 3]

                                                                      [0.0003s]
th> #SC:forward(torch.Tensor(3,20,20))                                                                                                 

  3
 16
 16
[torch.LongStorage of size 3]

Shouldn't the first be 3x20x20 instead?

                                                                      [0.0003s]
th> SFCS = nn.SpatialFullConvolutionMap(nn.tables.oneToOne(3), 5, 5, 4, 4)                                                             
                                                                      [0.0004s]
th> SCS = nn.SpatialConvolutionMap(nn.tables.oneToOne(3), 5, 5, 4, 4)                                                                  
                                                                      [0.0003s]
th> #SFCS:forward(torch.Tensor(3,20,20))                                                                                               

  3
 81
 81
[torch.LongStorage of size 3]

                                                                      [0.0005s]
th> #SCS:forward(torch.Tensor(3,20,20))                                                                                                

 3
 4
 4
[torch.LongStorage of size 3]

                                                                      [0.0003s]

What is 81x81?

Several *Table modules lack type() method

Several *Table modules (e.g. JoinTable or CAddTable) store tables in their gradInput member variable but define no appropriate type(type) method. This leads to the issue that a model cannot change type (e.g. float<->cuda) once a backpropagation has been performed. The proposed fix is to define the method appropriately, e.g.

function CAddTable:type(type)
   parent.type(self, type)
   self.gradInput = {}
end

Need help for backward training

Hi, all~

Currently, I'm plugging nn modules together through the Sequential container.
My NN script is adapted from @soumith/galaxyzoo for CUDA usage.
Everything works fine, however this error message is quite confusing.
I've checked Sequential.lua (and found that model:backward(input, df_do:cuda()) is related to Module.lua and Power.lua later).
There are two identical code snippets in my script; one of them works fine, the other doesn't.
Can anyone help me figure this out?

BTW, some functions in Module.lua do nothing with their input parameters; are those parameters
cleared when zeroParameters() is called?

/usr/local/bin/luajit: /usr/local/share/lua/5.1/nn/Power.lua:18: bad argument #1 to 'copy' (sizes do not match)                    
stack traceback:
    [C]: in function 'copy'
    /usr/local/share/lua/5.1/nn/Power.lua:18: in function 'updateGradInput'
    /usr/local/share/lua/5.1/nn/Sequential.lua:48: in function 'updateGradInput'
    /usr/local/share/lua/5.1/nn/Sequential.lua:48: in function 'updateGradInput'
    /usr/local/share/lua/5.1/nn/Module.lua:30: in function 'backward'
    <my_persional_lua_script>.lua:1029: in function 'opfunc'
    /usr/local/share/lua/5.1/optim/sgd.lua:40: in function 'optimMethod'

Thanks~

Lack of OpenMP support in LookupTable

I can see several for loops in the Lua code of LookupTable. I'm wondering if the OpenMP framework would help to speed up the LookupTable operation. I will be happy to work on it with your guidance. Thanks.

A question about using cuda

To me, torch and nn provide a simple and explicit way to use the GPU.
soumith/galaxyzoo serves as an example for me, but when I tried to use CUDA in my own program,
LuaJIT gave me errors like this:

attempt to call method 'cuda' (a nil value)

or

nn/SpatialConvolutionCUDA.lua:39: attempt to index field 'nn' (a nil value)

so I tried a test like this:

require 'nn'
require 'torch'
a = torch.rand(5,5)
b=a:cuda()

then LuaJIT gives the same error as the above message.

In addition, after checking the docs of nn and torch, I found that cuda() is documented under nn.Module; this function changes all the parameters of a module to CudaTensor. As we know, something like a:cuda() copies data to GPU memory, but this time it didn't work.

Can anyone help me figure this out?

nn.Max doesn't seem to work on cuda

nn.Max(2):forward(x) where x is a 4d tensor will fail with error message:
" bad argument #1 to 'forward' (only supported dimension is innermost (CUDA kernel only))"

It seems that nn.Max only works over the last dimension on cuda.

As a temporary hack one could do
nn.Sequential():add(nn.Transpose({2,3})):add(nn.Transpose({3,4})):add(nn.Max(4))
but this is not satisfactory.

Is there a solution to this problem which doesn't involve concatenating two transpose operations?
