
deep-visualization-toolbox's People

Contributors

arikpoz, grisaitis, jliemansifry, lionleaf, lukeyeager, yosinski


deep-visualization-toolbox's Issues

Error when toggling backprop view

I get this when I try to toggle between the forward activations and the backprop view:

Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/home/smcallis/tmp/deep-visualization-toolbox/caffevis/app.py", line 158, in run
    self.net.backward_from_layer(backprop_layer, diffs, zero_higher = True)
AttributeError: 'Classifier' object has no attribute 'backward_from_layer'

After that, the GUI seems to hang. I believe I'm on the latest caffe (from GitHub).
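For anyone hitting this: `backward_from_layer` is not part of stock caffe's Python `Net`; it is added by the special deconv-deep-vis-toolbox branch of caffe. A defensive guard (a sketch; the helper name and fallback message are mine) makes the failure explicit instead of silently killing the worker thread:

```python
def safe_backward_from_layer(net, layer, diffs, zero_higher=True):
    """Call backward_from_layer if this caffe build provides it,
    otherwise fail loudly instead of hanging a worker thread."""
    if hasattr(net, 'backward_from_layer'):
        return net.backward_from_layer(layer, diffs, zero_higher=zero_higher)
    raise RuntimeError(
        'This caffe build lacks backward_from_layer; build the '
        'deconv-deep-vis-toolbox branch of caffe instead of master.')
```

With a master-branch caffe, this raises a clear error; with the toolbox's branch, it behaves exactly like the original call.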

Using Kinect camera as the webcam for the toolbox

Hi everybody,
Does anyone have experience using a Kinect camera as the webcam for DeepVis?
On Fedora 23 the toolbox itself runs fine, but it does not detect the Kinect, even though I can use the Kinect easily via "camorama" or "cheese" on Fedora.
As far as I can tell (settings.py, line 34), there is no device number dedicated to the Kinect (instead of the default 0).
Any idea?
Also, I still have some problems importing static files as input into the toolbox.
Did I miss something in the installation that is not mentioned in the manual?
So, at the moment, I do not have any input!

Sorry if this is too basic (maybe?) :)

Ali
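On the device-number question: the toolbox only opens the single index configured in settings.py, so if the Kinect registers as, say, /dev/video1 it will never be found at index 0. A small probe (a sketch; the `opener` parameter is my addition so the loop can be exercised without hardware) can locate a working index to put in settings.py:

```python
def find_working_camera(max_devices=5, opener=None):
    """Return the first OpenCV device index that yields a frame, else None."""
    if opener is None:
        import cv2
        opener = cv2.VideoCapture
    for idx in range(max_devices):
        cap = opener(idx)
        # a device counts as working only if it opens AND delivers a frame
        ok = cap.isOpened() and cap.read()[0]
        cap.release()
        if ok:
            return idx
    return None
```

If this returns, e.g., 1, set the camera device in settings.py to 1. If it returns None for the Kinect, the kernel driver is likely not exposing it as a V4L2 device that OpenCV can read.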

Cannot use GPU in CPU-only Caffe: check mode.

When I try to run run_toolbox.py in CPU mode (which is what I have used all along), I get exactly the following error:

F0519 00:48:29.772080 26710 conv_layer.cpp:76] Cannot use GPU in CPU-only Caffe: check mode.

Please help. I cannot work out what to change, or where.
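A likely cause: the toolbox's settings still request GPU mode while caffe was compiled CPU-only, so the first conv layer hits the check-mode failure. If your settings.py has a GPU flag (the variable name below matches my copy of the toolbox's settings file, but verify against yours), setting it to False should resolve it:

```python
# settings.py -- force CPU mode to match a CPU-only caffe build
# (variable name assumed from the toolbox's settings file; check yours)
caffevis_mode_gpu = False
```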

I am getting zero outputs and blank squares

I followed all the steps and executed ./run_toolbox.py. I didn't get any errors, but I am receiving blank output. I also have force_backward: true in my prototxt file. I am attaching a screenshot below. Please suggest what I am doing wrong.

Thanks
[screenshot: deepvis]

segmentation fault : 11

Thank you for sharing this great toolbox.

I have compiled the master branch of caffe using Anaconda Python, and it is working correctly. Following the steps in your tutorial, I got everything running on Ubuntu.

On Mac OS X El Capitan, I am able to import caffe, cv, cv2, scipy, and skimage, but when running ./run_toolbox.py I get the following error:

got module <module 'caffevis.app' from '/Users/ahmedabobakr/deep-visualization-toolbox/caffevis/app.pyc'>
got app <class 'caffevis.app.CaffeVisApp'>
Got settings <module 'settings' from '/Users/ahmedabobakr/deep-visualization-toolbox/settings.pyc'>
Segmentation fault: 11

python-opencv dependency

To use this tool, you must have python-opencv installed. Otherwise you get this error:

Error: Could not import cv2, please install it first.
Traceback (most recent call last):
  File "./run_toolbox.py", line 3, in <module>
    from core import LiveVis
  File "/home/lyeager/code/deep-visualization-toolbox/core.py", line 14, in <module>
    import cv2
ImportError: No module named cv2

That package isn't required for caffe, so you should mention it in your installation instructions.

I installed it with sudo apt-get install python-opencv, but you could probably install it with pip if you preferred.

How can I customize it for other models?

Hi,

Is this tool very strongly tied to the model and dataset that come with it?
I was able to build the package and run the app (run_toolbox.py) fine. Now I want to visualize a caffe model that I trained myself. All I want the app to do is load a network (model prototxt + the weights saved in a file) and let me peek into the deeper layers by visualizing them (using any of the techniques discussed in the paper).

I don't want to load things such as the mean file, an image corpus, or anything else.
Rather, I want to visualize the neurons outside those contexts, just using the synthetic visualizations.

What would be the quickest way to do that?

Question 2: I see that the example model (caffenet-yos-deploy.prototxt) begins like this:

input: "data"
input_dim: 1
input_dim: 3
input_dim: 227
input_dim: 227
...

but the input blob in my model is given with a slightly different syntax:

layer {
  name: "data"
  type: "ImageData"
  ...
}

The app actually throws an error while trying to load that:

File ".../caffe/python/caffe/classifier.py", line 29, in __init__
    in_ = self.inputs[0]
IndexError: list index out of range

How exactly are we supposed to write the syntax for these models?
My understanding was that any caffe model should work; is that correct?

Thanks a lot.
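On question 2: caffe.Classifier builds self.inputs from the net's declared inputs, and an ImageData layer declares none (it reads images from its own list file), so self.inputs[0] raises the IndexError. For visualization you need a deploy-style variant of your prototxt in which the ImageData layer is replaced by a declared input; roughly like this (the dimensions below are placeholders — match your own net's input shape):

```
input: "data"
input_dim: 1
input_dim: 3
input_dim: 227
input_dim: 227
```

Keep all the remaining layers as they are and load the same .caffemodel weights file; only the data section changes between the training and deploy prototxts.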

generating sizes larger than 227x227, error resizing mean

Is it possible to generate images larger than 227x227 using the optimize_image.py script?

For example, when using --data-size '(500,500)' the script generates the error:
Traceback (most recent call last):
  File "./optimize_image.py", line 224, in <module>
    main()
  File "./optimize_image.py", line 186, in main
    raw_scale = 1.0,
  File "/root/caffe-yosinski/python/caffe/classifier.py", line 34, in __init__
    self.transformer.set_mean(in_, mean)
  File "/root/caffe-yosinski/python/caffe/io.py", line 258, in set_mean
    raise ValueError('Mean shape incompatible with input shape.')
ValueError: Mean shape incompatible with input shape.

Following the workaround suggested here (http://stackoverflow.com/questions/30808735/error-when-using-classify-in-caffe), the revised code (edited in caffenet-yos/python/caffe/io.py) generates another error:

(1, 3, 227, 227)
/usr/local/lib/python2.7/dist-packages/skimage/util/dtype.py:110: UserWarning: Possible precision loss when converting from int64 to float64
"%s to %s" % (dtypeobj_in, dtypeobj))
Traceback (most recent call last):
  File "./optimize_image.py", line 224, in <module>
    main()
  File "./optimize_image.py", line 186, in main
    raw_scale = 1.0,
  File "/root/caffe-yosinski/python/caffe/classifier.py", line 34, in __init__
    self.transformer.set_mean(in_, mean)
  File "/root/caffe-yosinski/python/caffe/io.py", line 264, in set_mean
    mean = resize_image(normal_mean.transpose((1,2,0)), in_shape[1:]).transpose((2,0,1)) * (m_max - m_min) + m_min
  File "/root/caffe-yosinski/python/caffe/io.py", line 332, in resize_image
    resized_std = resize(im_std, new_dims, order=interp_order)
  File "/usr/local/lib/python2.7/dist-packages/skimage/transform/_warps.py", line 119, in resize
    preserve_range=preserve_range)
  File "/usr/local/lib/python2.7/dist-packages/skimage/transform/_geometric.py", line 1296, in warp
    image = _convert_warp_input(image, preserve_range)
  File "/usr/local/lib/python2.7/dist-packages/skimage/transform/_geometric.py", line 1108, in _convert_warp_input
    image = img_as_float(image)
  File "/usr/local/lib/python2.7/dist-packages/skimage/util/dtype.py", line 301, in img_as_float
    return convert(image, np.float64, force_copy)
  File "/usr/local/lib/python2.7/dist-packages/skimage/util/dtype.py", line 241, in convert
    np.float32, np.float64))
  File "/usr/local/lib/python2.7/dist-packages/skimage/util/dtype.py", line 114, in _dtype
    return next(dt for dt in dtypes if itemsize < np.dtype(dt).itemsize)
StopIteration

so the image optimization never completes.

Is there a way to reformat the mean file (or 3-value tuple) in your code? Or would it be better to regenerate the .npy mean file at a larger size?

Thanks.
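One way to sidestep the mean resize entirely is to collapse the saved mean image to three per-channel means and broadcast them to whatever size you need; a per-channel mean is usually an acceptable approximation for visualization. A sketch (the file name in the usage comment is a placeholder):

```python
import numpy as np

def resize_mean_by_channel(mean, new_hw):
    """Collapse a (C, H, W) mean image to per-channel means and
    broadcast them to (C, new_h, new_w)."""
    channel_mean = mean.mean(axis=(1, 2))          # shape (C,)
    new_h, new_w = new_hw
    return np.broadcast_to(channel_mean[:, None, None],
                           (mean.shape[0], new_h, new_w)).copy()

# e.g.: big = resize_mean_by_channel(np.load('mean.npy'), (500, 500))
```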

Update the readme file

Hello,
First of all, thank you for your fantastic job creating the toolbox.
Secondly, I would like to raise several matters that confused me.
As far as I know, the latest version of caffe supports deconvolutional layers as well.
Knowing that, and that the readme was initially written for the former version and seemingly has not been updated since:

  1. Is it still mandatory to install the specific version of caffe mentioned in readme step 1 (the deconv-deep-vis-toolbox branch of caffe)?
  2. Is there a specific reason this toolbox is not described as compatible with Windows? Caffe is officially supported there now, and I guess the rest could be easily ported.

Before going and doing anything, I wanted to ask whether the changes that would need an update to the guide go beyond the issues I noticed.

Thanks a lot again

Webcam does not work

DeepVis starts and works, but if I press "f" to unfreeze the cam, nothing happens (the cam works fine in OpenCV). The same problem occurs if I press "s". I don't have the filter visualizations, only the convolutions.

Code to create montages / question about backpropagation in max tracker

I can't find the code that creates the montage images from the cropped activations and deconvolutions. Is it included, or could you release the code for that?

Also, at line 309 of max tracker, you set the diff to 1.0, but doesn't it make more sense to set it to recorded_val, so that the deconvolution/backpropagation is based on the maximum strength of the activation? This also differs from the tool, which backpropagates from the entire activation layer at line 89 of caffe_proc_thread.py:
diffs[0][backprop_unit] = self.net.blobs[backprop_layer].data[0,backprop_unit]

make -j pycaffe error

Hi yosinski,
I have the same problem as yrevar:

/usr/local/include/boost/python/detail/defaults_gen.hpp:225:46: note: expanded
from macro BOOST_PYTHON_OVERLOAD_CONSTRUCTORSN,n_args>::too_many_keywords assertion...^16 warnings generated.
Undefined symbols for architecture x86_64:"caffe::Net::DeconvFromTo(int, int)", referenced from:
caffe::init_module__caffe() in _caffe-e735b5.old:
symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [python/caffe/_caffe.so] Error 1

and I have deleted all the libcaffe.so files and soft links,

but the difference is this:

./include/caffe/net.hpp:81: void DeconvFromTo(int start, int end);
./python/caffe/_caffe.cpp:231: .def("deconv", &Net::DeconvFromTo)
./src/caffe/net.cpp:595: void Net::DeconvFromTo(int start, int end) {
./src/caffe/net.cpp:772: DeconvFromTo(start, 0);
./src/caffe/net.cpp:777: DeconvFromTo(layers_.size() - 1, end);
./src/caffe/net.cpp:782: DeconvFromTo(layers_.size() - 1, 0);

so we do not have the same code here.

Thanks for any ideas on how to solve this.

My system is OS X 10.10, using Anaconda2, in CPU_ONLY mode, with no NVIDIA GPU.

Just start and got error in the first line :/

So I have just started using this toolbox. I'm very new, so sorry if my question is stupid!
I went to the folder where I already have caffe set up, and from there I followed the first step for the
Deep Visualization Toolbox.
I was not expecting an error here, but I got this one:
Not a git repository (or any of the parent directories): .git
Can anyone help, please?

Problem at step 4

I just ran: $ ./run_toolbox.py
It reports that "self.heartbeat_lock" in core.py is invalid syntax.
How do I deal with this?

Gtk-ERROR **: GTK+ 2.x symbols detected

If I run the tool, I get this error:
(run_toolbox.py:16194): Gtk-ERROR **: GTK+ 2.x symbols detected. Using GTK+ 2.x and GTK+ 3 in the same process is not supported
Trace/breakpoint trap (core dumped)

It seems this is an issue with OpenCV; I get this error even with OpenCV 3.0.0. When I add "import gtk" to
deep-visualization-toolbox/core.py, the error goes away and the tool works fine.

K

compilation issues

When I run make -j (after following the earlier steps), I get the following error:

/include/caffe/common.hpp:5:27: fatal error: gflags/gflags.h: No such file or directory
#include <gflags/gflags.h>

And indeed gflags.h is missing. Am I missing something here?

Segmentation fault

Hi,

Thanks for sharing this toolbox; it's a very interesting piece of work!

I followed your guide to set everything up, and it seems to be configured correctly. However, when I start the toolbox, it shows a segmentation fault; the log is as follows:

I0715 14:43:45.048436   452 upgrade_proto.cpp:60] Successfully upgraded file specified using deprecated V1LayerParameter
I0715 14:43:45.049624   452 net.cpp:804] Ignoring source layer data
I0715 14:43:45.117197   452 net.cpp:804] Ignoring source layer loss


Running toolbox. Push h for help or q to quit.


InputImageFetcher: bind_camera starting
InputImageFetcher: skipping camera bind (device: None)
InputImageFetcher: bind_camera finished
Starting app: CaffeVisApp
  Prettified layer name: "pool1" -> "p1"
  Prettified layer name: "norm1" -> "n1"
  Prettified layer name: "pool2" -> "p2"
  Prettified layer name: "norm2" -> "n2"
  Prettified layer name: "pool5" -> "p5"
CaffeProcThread.run called
CaffeVisApp mode (in CaffeProcThread): CPU
JPGVisLoadingThread.run called
JPGVisLoadingThread.run: caffe_net_state is: free
JPGVisLoadingThread.run loop: next_frame: None, caffe_net_state: free, back_enabled: False
jpgvis_to_load_key: None
Segmentation fault (core dumped)

I traced the execution and found that in the JPGVisLoadingThread.run function, jpgvis_to_load_key is None, but I'm not sure whether this is what caused the fault.

BTW, one time the program ran correctly and showed the school bus demo image, but it NEVER happened again, even though I didn't change anything. I'm still wondering why...

I'm new to this toolbox and not yet familiar with it. What I'm trying to do is just get it working, get a general idea of how it works, and see whether I can use it for my demo. Can anyone help me with this?

Any advice is really appreciated! Thank you.

Failed running toolbox in GPU mode

I've successfully run the toolbox in CPU mode. I then recompiled caffe with cuDNN support to run the toolbox in GPU mode, but I got the messages below and it terminated. Can you help me figure out what's going on and how to fix it?

BTW: I merged the caffe/windows branch with the deconv-deep-vis-toolbox branch, and it compiled successfully. The toolbox runs well in CPU mode.

CaffeVisApp mode (in main thread): GPU
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0601 14:43:17.079457 17692 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I0601 14:43:17.081459 17692 upgrade_proto.cpp:52] Attempting to upgrade input file specified using deprecated V1LayerParameter: C:\Users\meitu\Downloads\TEST\deep-visualization-toolbox/models/caffenet-yos/caffenet-yos-deploy.prototxt
I0601 14:43:17.081459 17692 upgrade_proto.cpp:60] Successfully upgraded file specified using deprecated V1LayerParameter
I0601 14:43:17.081459 17692 upgrade_proto.cpp:66] Attempting to upgrade input file specified using deprecated input fields: C:\Users\meitu\Downloads\TEST\deep-visualization-toolbox/models/caffenet-yos/caffenet-yos-deploy.prototxt
I0601 14:43:17.081459 17692 upgrade_proto.cpp:69] Successfully upgraded file specified using deprecated input fields.
W0601 14:43:17.081459 17692 upgrade_proto.cpp:71] Note that future Caffe releases will only support input layers and not input fields.
I0601 14:43:17.082459 17692 net.cpp:49] Initializing net from parameters:
name: "CaffeNet"
force_backward: true
state {
  phase: TEST
}
layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 1
      dim: 3
      dim: 227
      dim: 227
    }
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    num_output: 1000
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
}
I0601 14:43:17.082459 17692 lay*** Check failure stack trace: ***

Metaresult: majority failure

Hello All,

I am trying to generate the per-unit visualizations for my own model, and I get the following error when I run optimize_image.py. This also happens for the default caffenet-yos models.

Metaresult: majority failure

Any help would be appreciated.

Thanks.

Keypresses not registering

Running loop. Push h for help or q to quit.

When I press h (or H, or Ctrl+H, etc.), I just get this in the console:

...........................................................................Not sure what to do with key 1048680
..............................................Not sure what to do with key 1114081
.................................Not sure what to do with key 1114184
.............................................................Not sure what to do with key 1114083
...Not sure what to do with key 1376235
.......................Not sure what to do with key 5505128
......................................................

So I can't really do anything with this tool. Am I missing something obvious? I have a standard keyboard.
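Those huge codes look like ordinary key codes with modifier/lock state OR'd into the high bits; on some OpenCV/GTK builds NumLock adds 0x100000, and indeed 1048680 = 0x100000 + 104, where 104 is 'h'. A common workaround (a sketch, not a patch to the toolbox itself) is to mask the value from cv2.waitKey down to its low byte before dispatching:

```python
def normalize_key(raw_key):
    """Strip the modifier/lock bits that some OpenCV builds OR into
    the value returned by cv2.waitKey()."""
    return raw_key & 0xFF

# 1048680 is the code reported above for a plain 'h' press
assert chr(normalize_key(1048680)) == 'h'
```

So toggling NumLock off may also make the keys register without any code change.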

ubuntu run error

Hi, I've done everything as described in the instructions, but when I run run_toolbox.py,
the last lines of output are as follows.
I'm using Ubuntu 14.04 with Anaconda and everything else.

Thanks!

I0602 15:13:34.449769 5788 net.cpp:804] Ignoring source layer data
I0602 15:13:34.498936 5788 net.cpp:804] Ignoring source layer loss

Running toolbox. Push h for help or q to quit.

OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvNamedWindow, file -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp, line 483
Traceback (most recent call last):
  File "./run_toolbox.py", line 33, in <module>
    main()
  File "./run_toolbox.py", line 28, in main
    lv.run_loop()
  File "/home/heisenberg/Codings/deep-visualization-toolbox/live_vis.py", line 121, in run_loop
    self.init_window()
  File "/home/heisenberg/Codings/deep-visualization-toolbox/live_vis.py", line 84, in init_window
    cv2.namedWindow(self.window_name)
cv2.error: -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp:483: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvNamedWindow

Is it possible to visualise the VGG net?

Thanks for releasing this useful tool.

I want to know how to use it to visualise just the VGG channel activations, not the synthetic images or deconv.

Has anyone achieved this?

Thanks.

Does this tool support per channel mean values?

I tried to import a data mean in the form:

cPickle.dump(np.array([105.908874512, 114.063842773, 116.282836914])[None,:], open('mean.npy', 'wb'))

But it raises an error. I guess this tool does not support per-channel means, only a mean image. Is this true?
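If the toolbox does expect a full (3, H, W) mean image, one workaround is to broadcast the three per-channel values into one before saving (a sketch; the (3, 227, 227) shape convention is assumed from the caffenet-yos mean):

```python
import numpy as np

def channel_mean_to_image(channel_mean, hw=(227, 227)):
    """Tile a length-3 per-channel mean into a (3, H, W) mean image."""
    cm = np.asarray(channel_mean, dtype=np.float64)
    return np.tile(cm[:, None, None], (1,) + tuple(hw))

# e.g.:
# mean = channel_mean_to_image([105.908874512, 114.063842773, 116.282836914])
# np.save('mean.npy', mean)
```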

1-channel network

Any help on using 1-channel input images? (I have a network trained on grayscale 1-channel images.)
I'm trying to modify image_misc.py, core.py, etc., but haven't succeeded yet.
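I don't have a full recipe, but the core shape change is collapsing the 3-channel frames the toolbox produces into the (H, W, 1) input a grayscale net expects. Something like this helper (a sketch, not tested against the toolbox's code paths) could be applied before the forward pass:

```python
import numpy as np

def to_single_channel(frame):
    """Collapse an (H, W, 3) color frame to (H, W, 1) by averaging the
    channels; pass (H, W) arrays through with an explicit channel axis."""
    frame = np.asarray(frame, dtype=np.float32)
    if frame.ndim == 3:
        frame = frame.mean(axis=2)
    return frame[:, :, np.newaxis]
```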

deconv visualization 'a' doesn't work

On my system, the basic visualizations of layers, activations, and images work.

When I press 'a', the interface does not crash, but it enters a useless state. I get the following message in the console:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/home/yiliu/deep-visualization-toolbox/caffevis/app.py", line 158, in run
    self.net.backward_from_layer(backprop_layer, diffs, zero_higher = True)
AttributeError: 'Classifier' object has no attribute 'backward_from_layer'

I am on the master branch of caffe. Perhaps you exported more functions locally on your own system?

Issue using numpy 1.10.1

Hi,

I have the program up and running; however, two functions are affected, and both return the error:

TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('uint8') with casting rule 'same_kind'

This occurs when attempting to access the webcam (by pressing c), which reports:

Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/home/alex/Programs/deep-visualization-toolbox/core.py", line 172, in run
    frame_full = read_cam_frame(self.bound_cap_device)
  File "/home/alex/Programs/deep-visualization-toolbox/image_misc.py", line 61, in read_cam_frame
    frame *= (255.0 / (frame.max() + 1e-6))
TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('uint8') with casting rule 'same_kind'

or when trying to access the list of help commands:

Traceback (most recent call last):
  File "./run_toolbox.py", line 29, in <module>
    main()
  File "./run_toolbox.py", line 24, in main
    lv.run_loop()
  File "/home/alex/Programs/deep-visualization-toolbox/core.py", line 439, in run_loop
    self.draw_help()
  File "/home/alex/Programs/deep-visualization-toolbox/core.py", line 529, in draw_help
    self.help_buffer[:] *= .7
TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('uint8') with casting rule 'same_kind'

My Google searches turned up this thread on in-place type casts, which behave differently in earlier numpy versions and wouldn't throw this error: numpy/numpy#6198

Have any other numpy 1.10.1 users had this problem?

The parts I have running are amazing!
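For what it's worth, both failures are in-place float multiplies on uint8 buffers, which numpy 1.10 rejects under the default 'same_kind' casting rule. Rewriting them out of place (a sketch of the pattern, not a patch to the exact toolbox lines) avoids the error:

```python
import numpy as np

frame = np.array([[0, 128, 255]], dtype=np.uint8)

# was: frame *= (255.0 / (frame.max() + 1e-6))   # TypeError on numpy >= 1.10
frame = (frame * (255.0 / (frame.max() + 1e-6))).astype(np.uint8)

assert frame.dtype == np.uint8
```

The same out-of-place rewrite applies to the `self.help_buffer[:] *= .7` line in draw_help.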

OpenCV Error

Hi,

I have Anaconda installed and I am able to import cv2 from Python, but when I run ./run_toolbox.py, I get the following error:

Running toolbox. Push h for help or q to quit.
OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvNamedWindow, file -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp, line 483
Traceback (most recent call last):
  File "./run_toolbox.py", line 29, in <module>
    main()
  File "./run_toolbox.py", line 24, in main
    lv.run_loop()
  File "/home/rick/Documents/deep-visualization-toolbox/core.py", line 309, in run_loop
    self.init_window()
  File "/home/rick/Documents/deep-visualization-toolbox/core.py", line 266, in init_window
    cv2.namedWindow(self.window_name)
cv2.error: -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp:483: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvNamedWindow

Changes for other data set for example CIFAR10

This is indeed a good tool. I thought that if I changed the weights to a CIFAR10-trained network and its labels, the tool would work on CIFAR10 images, but that is not true. It looks like the code assumes the input images are 227x227 and come from ImageNet. What changes should we make so the tool works with, for example, CIFAR10?

Also, what are the images shown on the right side? How are they related to the main conv images?

BGR->RGB convert issue

Hi,

After deploying this project successfully, I tried to run ./run_toolbox.py, but I got the following error and traceback:

  File "./run_toolbox.py", line 29, in <module>
    main()
  File "./run_toolbox.py", line 24, in main
    lv.run_loop()
  File "/home/zszhang/caffetoolbox/deep-visualization-toolbox/core.py", line 405, in run_loop
    self.display_frame(latest_frame_data)
  File "/home/zszhang/caffetoolbox/deep-visualization-toolbox/core.py", line 526, in display_frame
    self.panes['input'].data[:] = frame_disp
ValueError: could not broadcast input array from shape (270,270) into shape (270,270,3)

Then I read some of the source code and found that im = im[:,:,::-1] may not work well on my system.
I changed im = im[:,:,::-1] to im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) in cv2_read_file_rgb, and in cv2_imshow_rgb I changed cv2.imshow(window_name, img[:,:,::-1]) to img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) followed by cv2.imshow(window_name, img); then everything works (only static image files were used).

I don't know whether this is a general issue or whether it happens because my system configuration is odd or incorrect. Any ideas?

Thanks,
Baird
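Note that the traceback complains about broadcasting shape (270, 270) into (270, 270, 3), which points to the file being loaded as a single-channel (grayscale) array rather than to a channel-order problem; cv2.cvtColor happens to also fix this because it emits three channels. An explicit guard (a sketch) makes the intent clearer:

```python
import numpy as np

def ensure_three_channels(im):
    """Stack an (H, W) grayscale array into (H, W, 3) so it can be
    written into the toolbox's RGB display pane; 3-channel input is
    passed through unchanged."""
    im = np.asarray(im)
    if im.ndim == 2:
        im = np.dstack([im, im, im])
    return im
```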

Error starting the toolbox

I get the error below when I try to start the toolbox.
All the activations show a black image; I'm not sure why. Any suggestions?
(I'm using Ubuntu 12.04; I have a separate master branch of caffe, and it works fine.)

[screenshot attached]

optimize_image.py: Every unit returns 'Grad Exactly 0, failed'

I'm trying to visualize my grasp-detection network, which is AlexNet adapted for regression. I've found that every unit I try to optimize returns the zero-gradient error. The network itself works as intended. Here's the output:

python optimize_image.py --net-weights /home/joe/DeepGrasping/Models/caffeGraspTrainX_iter_10000.caffemodel --deploy-proto /home/joe/workspace/code/caffe/models/grasp/graspDeploy.prototxt --data-size 224,224 --push-layer conv1 --push-channel 1 --push-spatial 6,6 --decay 0.0001 --blur-radius 1.0 --blur-every 4 --lr-params "{'lr': 100.0}" --brave

I0526 14:49:57.373548 15181 layer_factory.hpp:77] Creating layer input
I0526 14:49:57.373570 15181 net.cpp:91] Creating Layer input
I0526 14:49:57.373580 15181 net.cpp:399] input -> data
I0526 14:49:57.373613 15181 net.cpp:141] Setting up input
I0526 14:49:57.373626 15181 net.cpp:148] Top shape: 1 3 224 224 (150528)
I0526 14:49:57.373634 15181 net.cpp:156] Memory required for data: 602112
I0526 14:49:57.373643 15181 layer_factory.hpp:77] Creating layer conv1
I0526 14:49:57.373657 15181 net.cpp:91] Creating Layer conv1
I0526 14:49:57.373667 15181 net.cpp:425] conv1 <- data
I0526 14:49:57.373677 15181 net.cpp:399] conv1 -> conv1
I0526 14:49:57.375252 15181 net.cpp:141] Setting up conv1
I0526 14:49:57.375278 15181 net.cpp:148] Top shape: 1 96 54 54 (279936)
I0526 14:49:57.375286 15181 net.cpp:156] Memory required for data: 1721856
I0526 14:49:57.375305 15181 layer_factory.hpp:77] Creating layer relu1
I0526 14:49:57.375319 15181 net.cpp:91] Creating Layer relu1
I0526 14:49:57.375329 15181 net.cpp:425] relu1 <- conv1
I0526 14:49:57.375339 15181 net.cpp:386] relu1 -> conv1 (in-place)
I0526 14:49:57.375356 15181 net.cpp:141] Setting up relu1
I0526 14:49:57.375366 15181 net.cpp:148] Top shape: 1 96 54 54 (279936)
I0526 14:49:57.375375 15181 net.cpp:156] Memory required for data: 2841600
I0526 14:49:57.375382 15181 layer_factory.hpp:77] Creating layer norm1
I0526 14:49:57.375393 15181 net.cpp:91] Creating Layer norm1
I0526 14:49:57.375401 15181 net.cpp:425] norm1 <- conv1
I0526 14:49:57.375411 15181 net.cpp:399] norm1 -> norm1
I0526 14:49:57.375424 15181 net.cpp:141] Setting up norm1
I0526 14:49:57.375434 15181 net.cpp:148] Top shape: 1 96 54 54 (279936)
I0526 14:49:57.375442 15181 net.cpp:156] Memory required for data: 3961344
I0526 14:49:57.375450 15181 layer_factory.hpp:77] Creating layer pool1
I0526 14:49:57.375461 15181 net.cpp:91] Creating Layer pool1
I0526 14:49:57.375469 15181 net.cpp:425] pool1 <- norm1
I0526 14:49:57.375479 15181 net.cpp:399] pool1 -> pool1
I0526 14:49:57.375497 15181 net.cpp:141] Setting up pool1
I0526 14:49:57.375507 15181 net.cpp:148] Top shape: 1 96 27 27 (69984)
I0526 14:49:57.375514 15181 net.cpp:156] Memory required for data: 4241280
I0526 14:49:57.375522 15181 layer_factory.hpp:77] Creating layer conv2
I0526 14:49:57.375536 15181 net.cpp:91] Creating Layer conv2
I0526 14:49:57.375545 15181 net.cpp:425] conv2 <- pool1
I0526 14:49:57.375555 15181 net.cpp:399] conv2 -> conv2
I0526 14:49:57.385344 15181 net.cpp:141] Setting up conv2
I0526 14:49:57.385375 15181 net.cpp:148] Top shape: 1 256 27 27 (186624)
I0526 14:49:57.385383 15181 net.cpp:156] Memory required for data: 4987776
I0526 14:49:57.385401 15181 layer_factory.hpp:77] Creating layer relu2
I0526 14:49:57.385416 15181 net.cpp:91] Creating Layer relu2
I0526 14:49:57.385426 15181 net.cpp:425] relu2 <- conv2
I0526 14:49:57.385437 15181 net.cpp:386] relu2 -> conv2 (in-place)
I0526 14:49:57.385449 15181 net.cpp:141] Setting up relu2
I0526 14:49:57.385459 15181 net.cpp:148] Top shape: 1 256 27 27 (186624)
I0526 14:49:57.385467 15181 net.cpp:156] Memory required for data: 5734272
I0526 14:49:57.385474 15181 layer_factory.hpp:77] Creating layer norm2
I0526 14:49:57.385485 15181 net.cpp:91] Creating Layer norm2
I0526 14:49:57.385493 15181 net.cpp:425] norm2 <- conv2
I0526 14:49:57.385504 15181 net.cpp:399] norm2 -> norm2
I0526 14:49:57.385517 15181 net.cpp:141] Setting up norm2
I0526 14:49:57.385526 15181 net.cpp:148] Top shape: 1 256 27 27 (186624)
I0526 14:49:57.385535 15181 net.cpp:156] Memory required for data: 6480768
I0526 14:49:57.385541 15181 layer_factory.hpp:77] Creating layer pool2
I0526 14:49:57.385553 15181 net.cpp:91] Creating Layer pool2
I0526 14:49:57.385561 15181 net.cpp:425] pool2 <- norm2
I0526 14:49:57.385571 15181 net.cpp:399] pool2 -> pool2
I0526 14:49:57.385584 15181 net.cpp:141] Setting up pool2
I0526 14:49:57.385594 15181 net.cpp:148] Top shape: 1 256 13 13 (43264)
I0526 14:49:57.385601 15181 net.cpp:156] Memory required for data: 6653824
I0526 14:49:57.385609 15181 layer_factory.hpp:77] Creating layer conv3
I0526 14:49:57.385625 15181 net.cpp:91] Creating Layer conv3
I0526 14:49:57.385634 15181 net.cpp:425] conv3 <- pool2
I0526 14:49:57.385644 15181 net.cpp:399] conv3 -> conv3
I0526 14:49:57.414319 15181 net.cpp:141] Setting up conv3
I0526 14:49:57.414361 15181 net.cpp:148] Top shape: 1 384 13 13 (64896)
I0526 14:49:57.414374 15181 net.cpp:156] Memory required for data: 6913408
I0526 14:49:57.414404 15181 layer_factory.hpp:77] Creating layer relu3
I0526 14:49:57.414429 15181 net.cpp:91] Creating Layer relu3
I0526 14:49:57.414446 15181 net.cpp:425] relu3 <- conv3
I0526 14:49:57.414464 15181 net.cpp:386] relu3 -> conv3 (in-place)
I0526 14:49:57.414485 15181 net.cpp:141] Setting up relu3
I0526 14:49:57.414507 15181 net.cpp:148] Top shape: 1 384 13 13 (64896)
I0526 14:49:57.414520 15181 net.cpp:156] Memory required for data: 7172992
I0526 14:49:57.414535 15181 layer_factory.hpp:77] Creating layer conv4
I0526 14:49:57.414561 15181 net.cpp:91] Creating Layer conv4
I0526 14:49:57.414578 15181 net.cpp:425] conv4 <- conv3
I0526 14:49:57.414597 15181 net.cpp:399] conv4 -> conv4
I0526 14:49:57.435600 15181 net.cpp:141] Setting up conv4
I0526 14:49:57.435652 15181 net.cpp:148] Top shape: 1 384 13 13 (64896)
I0526 14:49:57.435664 15181 net.cpp:156] Memory required for data: 7432576
I0526 14:49:57.435689 15181 layer_factory.hpp:77] Creating layer relu4
I0526 14:49:57.435714 15181 net.cpp:91] Creating Layer relu4
I0526 14:49:57.435731 15181 net.cpp:425] relu4 <- conv4
I0526 14:49:57.435748 15181 net.cpp:386] relu4 -> conv4 (in-place)
I0526 14:49:57.435770 15181 net.cpp:141] Setting up relu4
I0526 14:49:57.435786 15181 net.cpp:148] Top shape: 1 384 13 13 (64896)
I0526 14:49:57.435798 15181 net.cpp:156] Memory required for data: 7692160
I0526 14:49:57.435811 15181 layer_factory.hpp:77] Creating layer conv5
I0526 14:49:57.435837 15181 net.cpp:91] Creating Layer conv5
I0526 14:49:57.435853 15181 net.cpp:425] conv5 <- conv4
I0526 14:49:57.435870 15181 net.cpp:399] conv5 -> conv5
I0526 14:49:57.449239 15181 net.cpp:141] Setting up conv5
I0526 14:49:57.449275 15181 net.cpp:148] Top shape: 1 256 13 13 (43264)
I0526 14:49:57.449287 15181 net.cpp:156] Memory required for data: 7865216
I0526 14:49:57.449317 15181 layer_factory.hpp:77] Creating layer relu5x
I0526 14:49:57.449338 15181 net.cpp:91] Creating Layer relu5x
I0526 14:49:57.449354 15181 net.cpp:425] relu5x <- conv5
I0526 14:49:57.449371 15181 net.cpp:386] relu5x -> conv5 (in-place)
I0526 14:49:57.449391 15181 net.cpp:141] Setting up relu5x
I0526 14:49:57.449407 15181 net.cpp:148] Top shape: 1 256 13 13 (43264)
I0526 14:49:57.449419 15181 net.cpp:156] Memory required for data: 8038272
I0526 14:49:57.449432 15181 layer_factory.hpp:77] Creating layer pool5x
I0526 14:49:57.449451 15181 net.cpp:91] Creating Layer pool5x
I0526 14:49:57.449465 15181 net.cpp:425] pool5x <- conv5
I0526 14:49:57.449481 15181 net.cpp:399] pool5x -> pool5x
I0526 14:49:57.449503 15181 net.cpp:141] Setting up pool5x
I0526 14:49:57.449523 15181 net.cpp:148] Top shape: 1 256 6 6 (9216)
I0526 14:49:57.449535 15181 net.cpp:156] Memory required for data: 8075136
I0526 14:49:57.449548 15181 layer_factory.hpp:77] Creating layer fc6x
I0526 14:49:57.449581 15181 net.cpp:91] Creating Layer fc6x
I0526 14:49:57.449594 15181 net.cpp:425] fc6x <- pool5x
I0526 14:49:57.449612 15181 net.cpp:399] fc6x -> fc6x
I0526 14:49:57.455247 15181 net.cpp:141] Setting up fc6x
I0526 14:49:57.455301 15181 net.cpp:148] Top shape: 1 512 (512)
I0526 14:49:57.455312 15181 net.cpp:156] Memory required for data: 8077184
I0526 14:49:57.455333 15181 layer_factory.hpp:77] Creating layer relu6x
I0526 14:49:57.455354 15181 net.cpp:91] Creating Layer relu6x
I0526 14:49:57.455375 15181 net.cpp:425] relu6x <- fc6x
I0526 14:49:57.455396 15181 net.cpp:386] relu6x -> fc6x (in-place)
I0526 14:49:57.455418 15181 net.cpp:141] Setting up relu6x
I0526 14:49:57.455435 15181 net.cpp:148] Top shape: 1 512 (512)
I0526 14:49:57.455447 15181 net.cpp:156] Memory required for data: 8079232
I0526 14:49:57.455461 15181 layer_factory.hpp:77] Creating layer drop6x
I0526 14:49:57.455477 15181 net.cpp:91] Creating Layer drop6x
I0526 14:49:57.455492 15181 net.cpp:425] drop6x <- fc6x
I0526 14:49:57.455508 15181 net.cpp:386] drop6x -> fc6x (in-place)
I0526 14:49:57.455528 15181 net.cpp:141] Setting up drop6x
I0526 14:49:57.455544 15181 net.cpp:148] Top shape: 1 512 (512)
I0526 14:49:57.455556 15181 net.cpp:156] Memory required for data: 8081280
I0526 14:49:57.455569 15181 layer_factory.hpp:77] Creating layer fc7x
I0526 14:49:57.455587 15181 net.cpp:91] Creating Layer fc7x
I0526 14:49:57.455600 15181 net.cpp:425] fc7x <- fc6x
I0526 14:49:57.455620 15181 net.cpp:399] fc7x -> fc7x
I0526 14:49:57.455997 15181 net.cpp:141] Setting up fc7x
I0526 14:49:57.456015 15181 net.cpp:148] Top shape: 1 512 (512)
I0526 14:49:57.456028 15181 net.cpp:156] Memory required for data: 8083328
I0526 14:49:57.456046 15181 layer_factory.hpp:77] Creating layer relu7x
I0526 14:49:57.456064 15181 net.cpp:91] Creating Layer relu7x
I0526 14:49:57.456079 15181 net.cpp:425] relu7x <- fc7x
I0526 14:49:57.456094 15181 net.cpp:386] relu7x -> fc7x (in-place)
I0526 14:49:57.456110 15181 net.cpp:141] Setting up relu7x
I0526 14:49:57.456125 15181 net.cpp:148] Top shape: 1 512 (512)
I0526 14:49:57.456135 15181 net.cpp:156] Memory required for data: 8085376
I0526 14:49:57.456142 15181 layer_factory.hpp:77] Creating layer drop7x
I0526 14:49:57.456152 15181 net.cpp:91] Creating Layer drop7x
I0526 14:49:57.456159 15181 net.cpp:425] drop7x <- fc7x
I0526 14:49:57.456168 15181 net.cpp:386] drop7x -> fc7x (in-place)
I0526 14:49:57.456178 15181 net.cpp:141] Setting up drop7x
I0526 14:49:57.456187 15181 net.cpp:148] Top shape: 1 512 (512)
I0526 14:49:57.456194 15181 net.cpp:156] Memory required for data: 8087424
I0526 14:49:57.456202 15181 layer_factory.hpp:77] Creating layer fc8x
I0526 14:49:57.456212 15181 net.cpp:91] Creating Layer fc8x
I0526 14:49:57.456218 15181 net.cpp:425] fc8x <- fc7x
I0526 14:49:57.456229 15181 net.cpp:399] fc8x -> fc8x
I0526 14:49:57.456254 15181 net.cpp:141] Setting up fc8x
I0526 14:49:57.456270 15181 net.cpp:148] Top shape: 1 6 (6)
I0526 14:49:57.456281 15181 net.cpp:156] Memory required for data: 8087448
I0526 14:49:57.456311 15181 net.cpp:219] fc8x does not need backward computation.
I0526 14:49:57.456324 15181 net.cpp:219] drop7x does not need backward computation.
I0526 14:49:57.456332 15181 net.cpp:219] relu7x does not need backward computation.
I0526 14:49:57.456339 15181 net.cpp:219] fc7x does not need backward computation.
I0526 14:49:57.456347 15181 net.cpp:219] drop6x does not need backward computation.
I0526 14:49:57.456356 15181 net.cpp:219] relu6x does not need backward computation.
I0526 14:49:57.456362 15181 net.cpp:219] fc6x does not need backward computation.
I0526 14:49:57.456370 15181 net.cpp:219] pool5x does not need backward computation.
I0526 14:49:57.456378 15181 net.cpp:219] relu5x does not need backward computation.
I0526 14:49:57.456387 15181 net.cpp:219] conv5 does not need backward computation.
I0526 14:49:57.456394 15181 net.cpp:219] relu4 does not need backward computation.
I0526 14:49:57.456403 15181 net.cpp:219] conv4 does not need backward computation.
I0526 14:49:57.456410 15181 net.cpp:219] relu3 does not need backward computation.
I0526 14:49:57.456418 15181 net.cpp:219] conv3 does not need backward computation.
I0526 14:49:57.456426 15181 net.cpp:219] pool2 does not need backward computation.
I0526 14:49:57.456435 15181 net.cpp:219] norm2 does not need backward computation.
I0526 14:49:57.456444 15181 net.cpp:219] relu2 does not need backward computation.
I0526 14:49:57.456451 15181 net.cpp:219] conv2 does not need backward computation.
I0526 14:49:57.456467 15181 net.cpp:219] pool1 does not need backward computation.
I0526 14:49:57.456477 15181 net.cpp:219] norm1 does not need backward computation.
I0526 14:49:57.456485 15181 net.cpp:219] relu1 does not need backward computation.
I0526 14:49:57.456493 15181 net.cpp:219] conv1 does not need backward computation.
I0526 14:49:57.456501 15181 net.cpp:219] input does not need backward computation.
I0526 14:49:57.456508 15181 net.cpp:261] This network produces output fc8x
I0526 14:49:57.456539 15181 net.cpp:274] Network initialization done.
I0526 14:49:57.506186 15181 net.cpp:804] Ignoring source layer data
I0526 14:49:57.512642 15181 net.cpp:804] Ignoring source layer loss

Starting optimization with the following parameters:
FindParams:
blur_every: 4
blur_radius: 1.0
decay: 0.0001
lr_params: {'lr': 100.0}
lr_policy: constant
max_iter: 500
push_channel: 1
push_dir: 1
push_layer: conv1
push_spatial: (6, 6)
push_unit: (1, 6, 6)
px_abs_benefit_percentile: 0
px_benefit_percentile: 0
rand_seed: 0
small_norm_percentile: 0
small_val_percentile: 0
start_at: mean_plus_rand

0
push unit: (1, 6, 6) with value 0
Max idx: (1, 10, 40) with value 47.5277
X: -48.5211765318 42.8585564122 3869.55458723
grad: 0.0 0.0 0.0
Grad exactly 0, failed
Metaresult: grad 0 failure
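The "Grad exactly 0, failed" message above typically means the pushed unit is dead for this input: its activation is zero, so the ReLU blocks all gradient and gradient ascent on the input image cannot make progress. A minimal sketch of the kind of guard involved, assuming a NumPy gradient array `grad` (the function name is a hypothetical stand-in, not the toolbox's API):

```python
import numpy as np

def check_gradient(grad):
    """Return True if optimization can continue, False on a dead unit.

    A unit behind a ReLU that never activates for the current image
    produces an exactly-zero gradient, so the optimizer cannot move.
    """
    grad_norm = np.linalg.norm(grad)
    if grad_norm == 0:
        # Matches the toolbox's "Grad exactly 0, failed" message:
        # try a different unit, a different start image, or a lower layer.
        return False
    return True
```

When this fires, starting from `mean_plus_rand` with a different `rand_seed`, or pushing a different `push_unit`, usually gets past it.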

opencv error

cv2.error: -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp:483: error: (-2) The function is not implemented.

where should I do the cmake..

Error importing settings.py.

Sorry guys, my question might be very stupid, but I am very new to this.
I followed all the steps carefully, but I don't know what mistake I made that causes the following error :(
I'd appreciate it if anyone could help me out.
Thanks in advance.

OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvNamedWindow, file -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp, line 483
Traceback (most recent call last):
File "./run_toolbox.py", line 33, in
main()
File "./run_toolbox.py", line 28, in main
lv.run_loop()
File "/home/alireza/caffe/deep-visualization-toolbox/live_vis.py", line 121, in run_loop
self.init_window()
File "/home/alireza/caffe/deep-visualization-toolbox/live_vis.py", line 84, in init_window
cv2.namedWindow(self.window_name)
cv2.error: -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp:483: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvNamedWindow
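The error above means the installed OpenCV was built without a GUI backend, so `cv2.namedWindow` cannot create a window. On Ubuntu/Debian the usual fix, as the message itself suggests, is to install the GTK development packages and rebuild OpenCV. This also answers "where should I do the cmake": from OpenCV's own build directory, not the toolbox's. A sketch, assuming the OpenCV 2.4.10 source tree from the error message; paths may differ on your system:

```shell
# Install the GUI backend that OpenCV's highgui module needs
sudo apt-get install libgtk2.0-dev pkg-config

# Re-run cmake from the OpenCV build directory so it picks up GTK,
# then rebuild and reinstall the library
cd opencv-2.4.10/build
cmake -D WITH_GTK=ON ..
make -j4
sudo make install
```

After reinstalling, re-run `./run_toolbox.py`; `cvNamedWindow` should now succeed.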

Is this a display tool or a tool for generating visualization images?

I've run this on my server, and it produces this problem:
(Deep Visualization Toolbox:32051): Gtk-WARNING **: cannot open display:

So my question is: is this simply a tool for displaying visualization images, or a tool for generating visualization images for DL? I haven't gone into the details of the code.

AttributeError: 'str' object has no attribute 'nbytes'

I noticed this in the console log after I had been playing around with the tool for a while. Not sure if it's a big issue or not.

Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/home/lyeager/code/deep-visualization-toolbox/caffevis/app.py", line 278, in run
    self.cache.set(jpgvis_to_load_key, img_resize)
  File "/home/lyeager/code/deep-visualization-toolbox/numpy_cache.py", line 29, in set
    self._store_bytes += self._store[key].nbytes
AttributeError: 'str' object has no attribute 'nbytes'
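The traceback suggests a non-array value (likely a string sentinel or filename) was stored in the numpy-array cache, whose byte accounting calls `.nbytes` on every stored value. A hedged sketch of a defensive `set` that tolerates non-array values; `FIFOCache` and its fields are hypothetical stand-ins for the cache in the toolbox's `numpy_cache.py`, not its actual implementation:

```python
import numpy as np

class FIFOCache(object):
    """Simplified sketch of a byte-budgeted cache for numpy arrays."""

    def __init__(self):
        self._store = {}
        self._store_bytes = 0

    def set(self, key, value):
        self._store[key] = value
        # Only numpy arrays have .nbytes; fall back to len() for
        # strings etc. so a cached sentinel does not raise AttributeError.
        self._store_bytes += getattr(value, 'nbytes', len(value))
```

The alternative fix is upstream: ensure only `np.ndarray` values are ever passed to `cache.set`.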

Can't visualize one channel input network

I'm trying to visualize my caffemodel, whose input data is grayscale images, and the toolbox crashed.
How can I fix it to accept one-channel input models and convert input color images to grayscale automatically?
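Until the toolbox supports single-channel nets natively, one workaround is to convert each frame to grayscale before it is fed to the net. A minimal sketch using plain NumPy (equivalent to `cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)` up to rounding); the output shape assumes Caffe's (channels, height, width) layout, and the function name is illustrative:

```python
import numpy as np

def to_single_channel(frame_bgr):
    """Convert an HxWx3 BGR frame to a (1, H, W) grayscale float array.

    Uses the standard ITU-R BT.601 luma weights, matching what
    OpenCV's BGR-to-gray conversion computes.
    """
    b, g, r = frame_bgr[..., 0], frame_bgr[..., 1], frame_bgr[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return gray[np.newaxis, :, :].astype(np.float32)
```

The conversion would go where the toolbox preprocesses camera/static frames, so the net's one-channel data blob receives the right shape.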

Thanks a lot!

caffevis_unit_jpg_dir

I've gotten dvt working on my own models, and it's an incredibly useful tool, thanks!

What I haven't completed yet, is generating the data for the caffevis_unit_jpg_dir. I'm going to do my best to reverse engineer the file structure required by going through the dvt source code, but any documentation you could provide in that direction would be most appreciated by myself, and probably by future travellers down this path!

Linking errors when building pycaffe on caffe branch 'deconv-deep-vis-toolbox'

I am using an OS X El Capitan 10.11.2 system. I followed the Caffe build steps carefully, and on the caffe master branch I could build libcaffe as well as pycaffe with no issues; I also tried a couple of samples, which seem to work well. However, on the deconv-deep-vis-toolbox branch I always get the following error:

Undefined symbols for architecture x86_64:
  "caffe::Net<float>::DeconvFromTo(int, int)", referenced from:
      caffe::init_module__caffe() in _caffe-c1a51b.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

I am using the same Makefile and configs on this branch. I am pasting the final compiler command below in case you can figure out what's wrong. I am still trying to debug what's causing it.

/usr/bin/clang++ -shared -o python/caffe/_caffe.so python/caffe/_caffe.cpp \
        -o python/caffe/_caffe.so -pthread -fPIC -DCAFFE_VERSION=1.0.0-rc3 -DGTEST_USE_OWN_TR1_TUPLE=1 -DNDEBUG -O2 -DUSE_OPENCV -DUSE_LEVELDB -DUSE_LMDB -DCPU_ONLY -I/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I/usr/local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/include -I.build_release/src -I./src -I./include -I/System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/ -Wall -Wno-sign-compare -lcaffe -framework Accelerate -L/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib -L/usr/local/lib -L/usr/lib -L.build_release/lib  -lglog -lgflags -lprotobuf -lboost_system -lboost_filesystem -lm -lhdf5_hl -lhdf5 -lleveldb -lsnappy -llmdb -lopencv_core -lopencv_highgui -lopencv_imgproc -lboost_thread-mt -lcblas -lboost_python -lpython2.7 \
        -Wl,-rpath,@loader_path/../../build/lib
touch python/caffe/proto/__init__.py
echo PROTOC \(python\) src/caffe/proto/caffe.proto
protoc --proto_path=src/caffe/proto --python_out=python/caffe/proto src/caffe/proto/caffe.proto 

As I mentioned, the same command works on the caffe master branch: I could build libcaffe as well as pycaffe there. On the deepvis branch, however, the pycaffe build fails with the linking error mentioned above, so I believe there's some issue in the previous step, i.e. building libcaffe.so. It might be missing some symbols related to the Deconvolution layers.

Looking forward to some quick help. Thanks in advance!
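The undefined `caffe::Net<float>::DeconvFromTo` symbol usually means libcaffe was not rebuilt after switching branches, so `_caffe.so` links against a stale master-branch library that lacks the deconv methods. A shell sketch of a clean rebuild on the deconv branch (the `nm` check at the end is an optional sanity test):

```shell
git checkout deconv-deep-vis-toolbox
# Stale object files from the master build can mask the new symbols,
# so clean before rebuilding both the library and the Python bindings.
make clean
make all -j4
make pycaffe
# Confirm the symbol made it into the rebuilt library (name is mangled,
# so demangle with -C before grepping):
nm -C build/lib/libcaffe.so | grep DeconvFromTo
```

If the `nm` line prints nothing, libcaffe itself was built from the wrong sources and the pycaffe link will keep failing.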
