
marvin's People

Contributors

andyzeng, danielsuo, fyu, jianxiongxiao, shurans, xiaojianxiong, yindaz

marvin's Issues

Missing attribution

It seems that some of the code is based on Fast R-CNN, but there is no attribution listed anywhere. For example, it is clear that https://github.com/PrincetonVision/marvin/blob/master/marvin.hpp#L1414 is a line-by-line refactoring of Ross's code (copyrighted by Microsoft and licensed under the MIT license). It is important to handle copyright correctly and to acknowledge that parts of the code are derivative works of other libraries.

That being said, I think this library is great and I hope people pick it up.

Cannot compile marvin code

How can I solve this problem? Thank you.

nvcc -std=c++11 -O3 -o marvin marvin.cu -I. -I/usr/local/cudnn/v3/include/ -I/usr/local/cuda/include -L/usr/local/cudnn/v3/lib64 -L/usr/local/cuda/lib64 -lcudart -lcublas -lcudnn -lcurand -D_MWAITXINTRIN_H_INCLUDED
/usr/include/c++/5/functional(78): error: class "std::thread" has no member "result_type"
          detected during:
            instantiation of class "std::_Maybe_get_result_type<_Functor, void> [with _Functor=std::thread]" 
(86): here
            instantiation of class "std::_Weak_result_type_impl<_Functor> [with _Functor=std::thread]" 
(184): here
            instantiation of class "std::_Weak_result_type<_Functor> [with _Functor=std::thread]" 
(264): here
            instantiation of class "std::_Reference_wrapper_base_impl<true, true, _Tp> [with _Tp=std::thread]" 
(283): here
            instantiation of class "std::_Reference_wrapper_base<_Tp> [with _Tp=std::thread]" 
(399): here
            instantiation of class "std::reference_wrapper<_Tp> [with _Tp=std::thread]" 
/usr/include/c++/5/future(1638): here

/usr/include/c++/5/functional(266): error: class "std::thread" has no member "argument_type"
          detected during:
            instantiation of class "std::_Reference_wrapper_base_impl<true, true, _Tp> [with _Tp=std::thread]" 
(283): here
            instantiation of class "std::_Reference_wrapper_base<_Tp> [with _Tp=std::thread]" 
(399): here
            instantiation of class "std::reference_wrapper<_Tp> [with _Tp=std::thread]" 
/usr/include/c++/5/future(1638): here

/usr/include/c++/5/functional(267): error: class "std::thread" has no member "first_argument_type"
          detected during:
            instantiation of class "std::_Reference_wrapper_base_impl<true, true, _Tp> [with _Tp=std::thread]" 
(283): here
            instantiation of class "std::_Reference_wrapper_base<_Tp> [with _Tp=std::thread]" 
(399): here
            instantiation of class "std::reference_wrapper<_Tp> [with _Tp=std::thread]" 
/usr/include/c++/5/future(1638): here

/usr/include/c++/5/functional(268): error: class "std::thread" has no member "second_argument_type"
          detected during:
            instantiation of class "std::_Reference_wrapper_base_impl<true, true, _Tp> [with _Tp=std::thread]" 
(283): here
            instantiation of class "std::_Reference_wrapper_base<_Tp> [with _Tp=std::thread]" 
(399): here
            instantiation of class "std::reference_wrapper<_Tp> [with _Tp=std::thread]" 
/usr/include/c++/5/future(1638): here

/usr/include/c++/5/bits/stl_iterator_base_types.h(154): error: name followed by "::" must be a class or namespace name
          detected during:
            instantiation of class "std::__iterator_traits<_Iterator, void> [with _Iterator=int]" 
(163): here
            instantiation of class "std::iterator_traits<_Iterator> [with _Iterator=int]" 
dss.hpp(1321): here

/usr/include/c++/5/bits/stl_iterator_base_types.h(155): error: name followed by "::" must be a class or namespace name
          detected during:
            instantiation of class "std::__iterator_traits<_Iterator, void> [with _Iterator=int]" 
(163): here
            instantiation of class "std::iterator_traits<_Iterator> [with _Iterator=int]" 
dss.hpp(1321): here

/usr/include/c++/5/bits/stl_iterator_base_types.h(156): error: name followed by "::" must be a class or namespace name
          detected during:
            instantiation of class "std::__iterator_traits<_Iterator, void> [with _Iterator=int]" 
(163): here
            instantiation of class "std::iterator_traits<_Iterator> [with _Iterator=int]" 
dss.hpp(1321): here

/usr/include/c++/5/bits/stl_iterator_base_types.h(157): error: name followed by "::" must be a class or namespace name
          detected during:
            instantiation of class "std::__iterator_traits<_Iterator, void> [with _Iterator=int]" 
(163): here
            instantiation of class "std::iterator_traits<_Iterator> [with _Iterator=int]" 
dss.hpp(1321): here

/usr/include/c++/5/bits/stl_iterator_base_types.h(158): error: name followed by "::" must be a class or namespace name
          detected during:
            instantiation of class "std::__iterator_traits<_Iterator, void> [with _Iterator=int]" 
(163): here
            instantiation of class "std::iterator_traits<_Iterator> [with _Iterator=int]" 
dss.hpp(1321): here

9 errors detected in the compilation of "/tmp/tmpxft_000010fd_00000000-9_marvin.cpp1.ii".
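
These std::thread template errors come from the gcc 5 libstdc++ headers (/usr/include/c++/5), which CUDA 7.5's nvcc does not officially support; CUDA 7.5 supports host gcc only up to 4.9. A minimal sketch of one workaround, assuming gcc-4.9 is available as a distribution package and reusing the compile flags shown above:

    # install a host compiler supported by CUDA 7.5 (package names assume Ubuntu/Debian)
    sudo apt-get install gcc-4.9 g++-4.9

    # rebuild, pointing nvcc at it with -ccbin
    nvcc -std=c++11 -O3 -o marvin marvin.cu -ccbin g++-4.9 \
        -I. -I/usr/local/cudnn/v3/include/ -I/usr/local/cuda/include \
        -L/usr/local/cudnn/v3/lib64 -L/usr/local/cuda/lib64 \
        -lcudart -lcublas -lcudnn -lcurand -D_MWAITXINTRIN_H_INCLUDED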

Sharing in's with other backprop layers

Hi, I noticed that the Softmax layer cannot share its in's with other backprop layers. I am wondering what the outcome would look like if I mistakenly shared in's with a Softmax layer.

Could I write a SplitLayer (like Caffe's) to avoid this problem?

Thank you!

cannot find -lcudnn

When I configure marvin, this error confuses me:

/usr/bin/ld: skipping incompatible /usr/local/cudnn/v5/lib64/libcudnn.so when searching for -lcudnn
/usr/bin/ld: cannot find -lcudnn
collect2: error: ld returned 1 exit status

If ld cannot find -lcudnn, there are three possibilities:

  1. libcudnn is not installed
  2. the libcudnn version is wrong
  3. the libcudnn.so symlink is misconfigured

I have tried both the v4 and v5 versions of cuDNN, but neither works.
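
The "skipping incompatible ... libcudnn.so" message from ld usually means an architecture mismatch (for example a 32-bit libcudnn found while linking a 64-bit binary) or a symlink that resolves to the wrong file. Two quick diagnostic commands, using the paths from the message above, before reinstalling anything:

    # confirm the library being skipped is really 64-bit
    file /usr/local/cudnn/v5/lib64/libcudnn.so*

    # confirm the libcudnn.so symlink chain resolves to an actual libcudnn.so.5.x
    ls -l /usr/local/cudnn/v5/lib64/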

Compiling Issue on Ubuntu 16.04

The error is "memcpy not declared" which seems to be a common problem of Ubuntu 16.04 compatibility. I found the same error report in tensorflow, caffee and openCV.https://github.com/tensorflow/tensorflow/pull/2073 ; https://groups.google.com/forum/#!msg/caffe-users/Tm3OsZBwN9Q/XKGRKNdmBAAJ
Their solution is to add cxx_flag: "-D_FORCE_INLINES" to a specific file. However, I've no idea how to do that with Marvin.
And I can run Cuda samples, I also have verified cudnn installation. I think the dependencies are good.

Does anyone get an idea on this?

Environment:
Ubuntu 16.04
GCC v4.8.3
Cuda 7.5
cudnn v5.1

Error Report:
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
return (char *) memcpy (__dest, __src, __n) + __n;
^
marvin.hpp: At global scope:
marvin.hpp:12:17: note: #pragma message: Compiling using StorageT=half ComputeT=float
#pragma message "Compiling using StorageT=half ComputeT=float"
^
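
For reference, the flag can be added straight to the nvcc invocation in compile.sh. A minimal sketch, assuming the nvcc line looks like the one in the "Cannot compile marvin code" issue above, with the cuDNN paths adjusted as a guess for the v5.1 install described here:

    nvcc -std=c++11 -O3 -o marvin marvin.cu -D_FORCE_INLINES \
        -I. -I/usr/local/cudnn/v5/include/ -I/usr/local/cuda/include \
        -L/usr/local/cudnn/v5/lib64 -L/usr/local/cuda/lib64 \
        -lcudart -lcublas -lcudnn -lcurand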

run demo_vis_filter.m error

I use Mac OS X. I want to run the MNIST example. I can train a model correctly by running demo.sh, but I get the following error when running demo_vis_filter.m in MATLAB:

Error using feof
Invalid file identifier. Use fopen to generate a valid file identifier.

Error in readTensors (line 13)
while ~feof(fp)

Error in demo_vis_filter (line 11)
t=readTensors(sprintf('./filters_conv1_%d.tensor',n-1));
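
The "Invalid file identifier" error from feof means the fopen inside readTensors failed, which usually just means the .tensor files are not in MATLAB's current working directory. A quick check from the shell before re-running the script (the filename pattern comes from demo_vis_filter.m; the directory is whatever location the demo wrote its filter snapshots into):

    # run this in the directory MATLAB is started from
    ls ./filters_conv1_*.tensor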

compile error on ubuntu

Hi,

I am new to marvin and trying to install it on an Ubuntu workstation. I followed the readme, installed CUDA 7.5 and cuDNN v4rc properly, and downloaded the latest marvin-master, but it seems marvin has some issue compiling:

lidong@T7600:~/marvin$ which nvcc
/usr/local/cuda-7.5/bin/nvcc
lidong@T7600:~/marvin$ echo $LD_LIBRARY_PATH
/usr/local/cuda-7.5/lib64:/usr/local/cudnn/v4rc/lib64:
lidong@T7600:~/marvin$ ./compile.sh
marvin.hpp(6274): error: argument of type "half *" is incompatible with parameter of type "cudnnTensorDescriptor_t"

marvin.hpp(6276): error: argument of type "double" is incompatible with parameter of type "void *"

marvin.hpp(6278): error: argument of type "half *" is incompatible with parameter of type "double"

marvin.hpp(6278): error: too few arguments in function call

4 errors detected in the compilation of "/tmp/tmpxft_00006d3a_00000000-9_marvin.cpp1.ii".

Does anyone have any ideas?

Thanks a lot for any suggestions.

Regards,
lidong
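
Type-mismatch errors at those marvin.hpp lines usually mean the installed cuDNN headers declare different function signatures than the ones the current marvin master was written against (the cuDNN API changed between major releases). A quick way to confirm which release the headers at that path actually are, before swapping cuDNN versions (the grep targets are the standard version macros in cudnn.h; very old releases may define only CUDNN_VERSION):

    grep -E "CUDNN_MAJOR|CUDNN_MINOR|CUDNN_VERSION" /usr/local/cudnn/v4rc/include/cudnn.h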

configure problem

I tried to configure marvin on Ubuntu 14.10 with gcc 5.2.1, CUDA 7.5, and cuDNN v5, but it failed.
Could you tell me which Ubuntu and gcc versions you used in a successful build?

compile error: identifier "CUBLAS_DATA_HALF" is undefined

I have run into a problem during installation. The compile error is:

marvin.hpp(2226): error: identifier "CUBLAS_DATA_HALF" is undefined
1 error detected in the compilation of "/tmp/tmpxft_00007625_00000000-9_marvin.cpp1.ii".

I am on Ubuntu 16.04 with CUDA 8.0 and cuDNN 5.1.
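
CUBLAS_DATA_HALF comes from the CUDA 7.x cuBLAS headers; CUDA 8.0 replaced those enumerators with the generic cudaDataType_t names, where the half type is CUDA_R_16F. One possible workaround (an unofficial mapping on my part, so verify the results) is to define the old name on the nvcc command line in compile.sh, keeping the rest of the flags unchanged; the include and library paths below are only placeholders for this machine:

    nvcc -std=c++11 -O3 -o marvin marvin.cu -DCUBLAS_DATA_HALF=CUDA_R_16F \
        -I. -I/usr/local/cudnn/v5/include/ -I/usr/local/cuda/include \
        -L/usr/local/cudnn/v5/lib64 -L/usr/local/cuda/lib64 \
        -lcudart -lcublas -lcudnn -lcurand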

ComputeT@half

Hi,
I found that marvin.hpp does not support the half datatype for ComputeT. May I know whether there is any plan to support ComputeT@half?
Do you think it would be useful for improving speed on marvin@TX1 + cuDNN 4/5 (0.5 TFLOPS @ fp32, 1 TFLOPS @ fp16)?

Thank you.

Add pre-trained models

  • AlexNet trained on ImageNet
  • AlexNet trained on Places
  • GoogLeNet trained on ImageNet
  • GoogLeNet trained on Places
  • VGGNet 16 trained on ImageNet
  • VGGNet 19 trained on ImageNet

plotNet error

>> plotNet()
The following error occurred converting from string to cell:
Conversion to cell from string is not possible.

Error in plotNet (line 21)
            nameofResponse(end+1) = out(j);

error with nvcc.exe when building marvin on Windows

  Error 151 error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\bin\nvcc.exe" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --use-local-env --cl-version 2013 -ccbin "D:\programming\vs2013\VC\bin\x86_amd64"  -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\include"  -G   --keep-dir x64\Debug -maxrregcount=0  --machine 64 --compile -cudart static  -g   -DWIN32 -DWIN64 -D_DEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o x64\Debug\marvin.cu.obj "E:\marvin\marvin-windows\marvin.cu"" exited with code 2.    C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\BuildCustomizations\CUDA 7.5.targets 604 9   marvin

There are also some other errors in concrt.h, like:

Error 111 error C2001: newline in constant D:\programming\vs2013\VC\include\concrt.h 5658 1 marvin
Error 116 error C2015: too many characters in constant D:\programming\vs2013\VC\include\concrt.h 5679 1 marvin
Error 125 error C2143: syntax error : missing ')' before ';' D:\programming\vs2013\VC\include\concrt.h 5696 1 marvin

how to get the output of a hidden layer?

I am a new user of marvin and cannot find any tutorial about extracting the output of a hidden layer. Could anyone answer this simple question? Thanks.
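
One approach, assuming the marvin binary follows the pattern of its demos where the test subcommand accepts extra response-name and output-file arguments (the JSON, snapshot, and layer names below are placeholders, not the real MNIST filenames), is to dump the response of the layer you want into a .tensor file and read it back with the readTensors.m script used by the demos:

    # assumption: ./marvin test <net.json> <weights.marvin> <response name> <output .tensor>
    ./marvin test examples/mnist/lenet.json lenet_snapshot.marvin fc2 fc2_response.tensor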

classification demo error

@danielsuo I use CUDA 7.5, cuDNN v5, and the latest marvin. While running the classification demo, I got a segmentation fault; gdb says:

0x00007ffff2ace01d in cudnnDropoutGetStatesSize ()
from /usr/local/cudnn/v5/lib64/libcudnn.so.5

I have no idea how to fix it. Is there anything wrong?
By the way, I get the correct result when I run the MNIST demo. The following is what I did to run the classification demo:

  1. install CUDA 7.5 from the .run file and confirm the graphics driver is working;
  2. download cudnn-7.5-linux-x64-v5.0-rc.tgz and unzip "include" and "lib64" into "/usr/local/cudnn/v5";
  3. download marvin and compile;
  4. download the classification model and data;
  5. run the demo

output is:

Hello, World! This is Marvin. I am at a rough estimate thirty billion times more intelligent than you. Let me give you an example.

[New Thread 0x7ffff0598700 (LWP 17850)]
[New Thread 0x7fffe7bff700 (LWP 17851)]
MemoryDataLayer dataTrain loading data:
75.4819 MB
name:image dim[4]={256,3,227,227}
0.5 KB
name:label dim[4]={256,1,1,1}
301.928 KB
name:imagenet1000 227x227x3 mean image dim[3]={3,227,227}
MemoryDataLayer dataTest loading data:
75.4819 MB
name:image dim[4]={256,3,227,227}
0.5 KB
name:label dim[4]={256,1,1,1}
301.928 KB

name:imagenet1000 227x227x3 mean image dim[3]={3,227,227}

Layers: Responses:

dataTest
data[4]={256,3,227,227} RF[1,1] GP[1,1] OF[0,0]
label[4]={256,1,1,1} RF[1,1] GP[1,1] OF[0,0]
conv1 weight[4]={96,3,11,11} bias[4]={1,96,1,1}
conv1[4]={256,96,55,55} RF[11,11] GP[4,4] OF[0,0]
relu1
norm1
norm1[4]={256,96,55,55} RF[11,11] GP[4,4] OF[0,0]
pool1
pool1[4]={256,96,27,27} RF[19,19] GP[8,8] OF[0,0]
conv2 (2 groups) weight[4]={256,48,5,5} bias[4]={1,256,1,1}
conv2[4]={256,256,27,27} RF[51,51] GP[8,8] OF[-16,-16]
relu2
norm2
norm2[4]={256,256,27,27} RF[51,51] GP[8,8] OF[-16,-16]
pool2
pool2[4]={256,256,13,13} RF[67,67] GP[16,16] OF[-16,-16]
conv3 weight[4]={384,256,3,3} bias[4]={1,384,1,1}
conv3[4]={256,384,13,13} RF[99,99] GP[16,16] OF[-32,-32]
relu3
conv4 (2 groups) weight[4]={384,192,3,3} bias[4]={1,384,1,1}
conv4[4]={256,384,13,13} RF[131,131] GP[16,16] OF[-48,-48]
relu4
conv5 (2 groups) weight[4]={256,192,3,3} bias[4]={1,256,1,1}
conv5[4]={256,256,13,13} RF[163,163] GP[16,16] OF[-64,-64]
relu5
pool5
pool5[4]={256,256,6,6} RF[195,195] GP[32,32] OF[-64,-64]
fc6 weight[2]={4096,9216} bias[1]={4096}
fc6[4]={256,4096,1,1} RF[355,355] GP[0,0] OF[0,0]
relu6
drop6

Program received signal SIGSEGV, Segmentation fault.

0x00007ffff2ace01d in cudnnDropoutGetStatesSize ()
from /usr/local/cudnn/v5/lib64/libcudnn.so.5
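
A crash inside cudnnDropoutGetStatesSize while the drop6 layer is being set up often points to a mismatch between the cuDNN headers marvin was compiled against and the libcudnn.so.5 actually loaded at run time (here a v5.0 RC), or to LD_LIBRARY_PATH resolving a different copy. Two diagnostic commands using the paths from the backtrace, worth running before rebuilding against the final cuDNN v5 release:

    # which libcudnn the marvin binary resolves at run time
    ldd ./marvin | grep cudnn

    # which exact file the .so.5 symlink points to (RC vs. final release)
    ls -l /usr/local/cudnn/v5/lib64/libcudnn.so.5*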

windows error

D:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\functional(1148): error : no instance of overloaded function "std::_Pmd_wrap<_Pmd_t, _Rx, _Farg0>::operator() [with _Pmd_t=void (marvin::DiskDataLayer::*)(), _Rx=void (), _Farg0=marvin::DiskDataLayer]" matches the argument list
1>e:\code\marvin_windows\marvin_windows\marvin.hpp(3874): error : no operator "=" matches these operands

Windows 7 + CUDA 7.5 + cuDNN v3

compile error

When I use gcc-4.8 to run compile.sh, the following error messages are reported.

marvin.hpp(6266): error: argument of type "const void *" is incompatible with parameter of type "cudnnTensorDescriptor_t"

marvin.hpp(6276): error: argument of type "half *" is incompatible with parameter of type "double"

marvin.hpp(6278): error: argument of type "double" is incompatible with parameter of type "const void *"

marvin.hpp(6279): error: too many arguments in function call
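
These look like the same kind of cuDNN header/API mismatch as in the "compile error on ubuntu" issue above: the signatures called at those marvin.hpp lines do not match what the installed cudnn.h declares. A reasonable first check is to see which cuDNN paths compile.sh actually uses and which release lives there (both commands only inspect files; the install prefix is an assumption):

    # which cuDNN include/lib paths the build script points at
    grep -n cudnn compile.sh

    # which cuDNN release those headers actually are
    grep -E "CUDNN_MAJOR|CUDNN_VERSION" /usr/local/cudnn/*/include/cudnn.h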
