mikesj-public / convolutional_autoencoder
Code for a convolutional autoencoder written in Python with Theano, Lasagne, and nolearn
Hi,
I'm trying to write an interface/wrapper around your example convolutional autoencoder code, so that it's a little easier for someone who may want to point it at other datasets (a folder of images on disk), use color images, or change some parameters of the network structure.
In the "Unpool2DLayer" class, I notice that "get_output_for()" correctly calls "get_output_shape_for()", which uses "ds" to compute the new unpooled shape. However, the returned value seems to use a fixed factor of "2" rather than "ds".
Existing code:
def get_output_for(self, input, **kwargs):
    ds = self.ds
    input_shape = input.shape
    output_shape = self.get_output_shape_for(input_shape)
    return input.repeat(2, axis=2).repeat(2, axis=3)
Shouldn't that last line be:
return input.repeat(self.ds[0], axis=2).repeat(self.ds[1], axis=3)
Am I correct, or am I not properly understanding the code?
Thanks.
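For what it's worth, the reading above can be checked in plain NumPy, since `repeat` behaves the same way there as on a Theano tensor. A minimal sketch (the `unpool` function is a hypothetical stand-in for the layer body, not code from the repo):

```python
import numpy as np

def unpool(x, ds):
    # Mirror of the proposed fix: repeat each spatial cell ds[0] x ds[1] times
    return x.repeat(ds[0], axis=2).repeat(ds[1], axis=3)

x = np.zeros((2, 3, 4, 4))
print(unpool(x, (2, 2)).shape)  # (2, 3, 8, 8)
print(unpool(x, (3, 2)).shape)  # (2, 3, 12, 8) -- the hardcoded 2s would wrongly give (2, 3, 8, 8)
```

With ds=(2, 2) the hardcoded version happens to give the same result, which is presumably why the bug goes unnoticed in the MNIST example.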
I rewrote the network using the newer specifications of lasagne as follows:
conv_filters = 32
deconv_filters = 32
filter_sizes = 7
epochs = 20
encode_size = 40
l_in = layers.InputLayer((None, 1, 28, 28))
l_conv = layers.Conv2DLayer(l_in, num_filters=conv_filters, filter_size=(filter_sizes, filter_sizes), pad="valid", nonlinearity=None, W=init.GlorotUniform(), b=init.Constant(0.))
l_pool = layers.MaxPool2DLayer(l_conv, pool_size=(2, 2))
l_flat = ReshapeLayer(l_pool, shape=([0], -1))
l_en = layers.DenseLayer(l_flat, num_units=encode_size)
l_hidd = layers.DenseLayer(l_en, num_units=deconv_filters * (28 + filter_sizes - 1) ** 2 / 4, W=init.GlorotUniform(), b=init.Constant(0.))
l_unflat = ReshapeLayer(l_hidd, shape=([0], deconv_filters, (28 + filter_sizes - 1) / 2, (28 + filter_sizes - 1) / 2))
l_unpool = Unpool2DLayer(l_unflat, ds=(2, 2))
l_deconv = layers.Conv2DLayer(l_unpool, num_filters=1, filter_size=(filter_sizes, filter_sizes), pad="valid", nonlinearity=None)
l_out = layers.ReshapeLayer(l_deconv, shape=([0], -1))
ae = NeuralNet(
    layers=[('input', l_in), ('conv', l_conv), ('pool', l_pool), ('flatten', l_flat),
            ('encode_layer', l_en), ('hidden', l_hidd), ('unflatten', l_unflat),
            ('unpool', l_unpool), ('deconv', l_deconv), ('output_layer', l_out)],
    update_learning_rate=0.01,
    update_momentum=0.975,
    batch_iterator_train=FlipBatchIterator(batch_size=128),
    regression=True,
    max_epochs=epochs,
    verbose=1)
ae.fit(X_train, X_out)
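As a sanity check (unrelated to the error below), the layer-size arithmetic in the snippet does work out; the only caveat is that the `/` divisions rely on Python 2 integer division, so `//` would be safer if this is ever run on Python 3:

```python
deconv_filters = 32
filter_sizes = 7

side = 28 + filter_sizes - 1              # 34: the deconv must output 34x34 to crop back to 28x28
hidden = deconv_filters * side ** 2 // 4  # 9248 units in the 'hidden' DenseLayer
half = side // 2                          # 17: spatial size after the implicit 2x2 pooling

print(hidden, (deconv_filters, half, half))  # 9248 (32, 17, 17)
assert hidden == deconv_filters * half * half  # the 'unflatten' reshape is consistent
```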
but I got the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-122-fcd713c0425c> in <module>()
58 # print prediction
59
---> 60 ae.fit(X_train, X_out, epochs=None)
61 print 'done'
62 ### expect training / val error of about 0.087 with these parameters
/home/tarek/Libraries/anaconda/lib/python2.7/site-packages/nolearn/lasagne/base.pyc in fit(self, X, y, epochs)
443 y = self.enc_.fit_transform(y).astype(np.int32)
444 self.classes_ = self.enc_.classes_
--> 445 self.initialize()
446
447 try:
/home/tarek/Libraries/anaconda/lib/python2.7/site-packages/nolearn/lasagne/base.pyc in initialize(self)
284 out = getattr(self, '_output_layer', None)
285 if out is None:
--> 286 out = self._output_layer = self.initialize_layers()
287 self._check_for_unused_kwargs()
288
/home/tarek/Libraries/anaconda/lib/python2.7/site-packages/nolearn/lasagne/base.pyc in initialize_layers(self, layers)
359 # assumed to require an 'incoming' paramter. By default,
360 # we'll use the previous layer as input:
--> 361 if not issubclass(layer_factory, InputLayer):
362 if 'incoming' in layer_kw:
363 layer_kw['incoming'] = self.layers_[
TypeError: issubclass() arg 1 must be a class
any ideas?
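Not the author, but the failing check in the traceback, `issubclass(layer_factory, InputLayer)`, raises exactly this TypeError when it is handed a layer *instance* (like `l_in` above) instead of a layer *class*, because `issubclass` requires a class as its first argument. A minimal reproduction with a hypothetical stand-in class:

```python
class InputLayer(object):  # hypothetical stand-in for lasagne.layers.InputLayer
    pass

l_in = InputLayer()

print(issubclass(InputLayer, InputLayer))  # True: a class is accepted
try:
    issubclass(l_in, InputLayer)           # an instance triggers the same TypeError
except TypeError as e:
    print(e)                               # issubclass() arg 1 must be a class
```

So the nolearn version in use here appears to expect `(name, LayerClass)` pairs rather than pre-built layer instances; which convention is supported depends on the nolearn version installed.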
I'm attempting to install the requirements to run this code and get the following error:
Could not find a version that satisfies the requirement Lasagne==0.2.dev1 (from -r requirements.txt (line 3)) (from versions: 0.1)
No matching distribution found for Lasagne==0.2.dev1 (from -r requirements.txt (line 3))
I'm assuming the requirements file is out of date. Could you update it please?
Edit:
I was using pip3 instead of pip2; nevertheless, I get the following error:
Could not find a version that satisfies the requirement nolearn==0.6a0.dev0 (from -r requirements.txt (line 6)) (from versions: 0.1b1, 0.2b1, 0.2, 0.3, 0.3.1, 0.4, 0.5b1, 0.5)
No matching distribution found for nolearn==0.6a0.dev0 (from -r requirements.txt (line 6))
Hi!
Great post about auto encoder!
I had this issue when trying to run the code in an IPython notebook:
(BTW, I'm pretty new to Python and the libraries you used, so the error might be the result of a faulty installation)
c:\anaconda\lib\lasagne-master\lasagne\init.py:86: UserWarning: The uniform initializer no longer uses Glorot et al.'s approach to determine the bounds, but defaults to the range (-0.01, 0.01) instead. Please use the new GlorotUniform initializer to get the old behavior. GlorotUniform is now the default for all layers.
warnings.warn("The uniform initializer no longer uses Glorot et al.'s "
TypeError Traceback (most recent call last)
in ()
39 verbose=1,
40 )
---> 41 ae.fit(X_train, X_out)
C:\Anaconda\lib\site-packages\nolearn\lasagne.pyc in fit(self, X, y)
136 out = getattr(self, '_output_layer', None)
137 if out is None:
--> 138 out = self._output_layer = self.initialize_layers()
139 if self.verbose:
140 self._print_layer_info(self.get_all_layers())
C:\Anaconda\lib\site-packages\nolearn\lasagne.pyc in initialize_layers(self, layers)
367 for (layer_name, layer_factory) in self.layers[1:]:
368 layer_params = self._get_params_for(layer_name)
--> 369 layer = layer_factory(layer, **layer_params)
370
371 self._output_layer = layer
TypeError: __init__() takes at least 3 arguments (2 given)
Hi,
I'm getting an error: "TypeError: cost must be a scalar." in line 432 of gradient.py in theano.
I'm using Anaconda with Python 2.7, Lasagne 0.1.dev0, scikit-learn 0.15.2, Theano 0.7, "nolearn-0.6a0.dev0".
Can you specify the versions of Theano, Lasagne, and nolearn that work with your code?
python mnist_conv_autoencode.py
Using gpu device 0: Tesla K40m
/usr/local/anaconda/lib/python2.7/site-packages/Lasagne-0.1.dev0-py2.7.egg/lasagne/init.py:86: UserWarning: The uniform initializer no longer uses Glorot et al.'s approach to determine the bounds, but defaults to the range (-0.01, 0.01) instead. Please use the new GlorotUniform initializer to get the old behavior. GlorotUniform is now the default for all layers.
warnings.warn("The uniform initializer no longer uses Glorot et al.'s "
/usr/local/anaconda/lib/python2.7/site-packages/Lasagne-0.1.dev0-py2.7.egg/lasagne/layers/helper.py:69: UserWarning: get_all_layers() has been changed to return layers in topological order. The former implementation is still available as get_all_layers_old(), but will be removed before the first release of Lasagne. To ignore this warning, use warnings.filterwarnings('ignore', '.*topo.*').
warnings.warn("get_all_layers() has been changed to return layers in "
/usr/local/anaconda/lib/python2.7/site-packages/nolearn/lasagne.py:376: UserWarning: layer.get_output_shape() is deprecated and will be removed for the first release of Lasagne. Please use layer.output_shape instead.
output_shape = layer.get_output_shape()
ReshapeLayer (None, 784) produces 784 outputs
Conv2DLayer (None, 1, 28, 28) produces 784 outputs
Unpool2DLayer (None, 32, 34, 34) produces 36992 outputs
ReshapeLayer (None, 32, 17, 17) produces 9248 outputs
DenseLayer (None, 9248) produces 9248 outputs
DenseLayer (None, 40) produces 40 outputs
ReshapeLayer (None, 3872) produces 3872 outputs
MaxPool2DLayer (None, 32, 11, 11) produces 3872 outputs
Conv2DLayer (None, 32, 22, 22) produces 15488 outputs
InputLayer (None, 1, 28, 28) produces 784 outputs
/usr/local/anaconda/lib/python2.7/site-packages/nolearn/lasagne.py:283: UserWarning: layer.get_output(...) is deprecated and will be removed for the first release of Lasagne. Please use lasagne.layers.get_output(layer, ...) instead.
output_layer.get_output(X_batch), y_batch)
/usr/local/anaconda/lib/python2.7/site-packages/nolearn/lasagne.py:285: UserWarning: layer.get_output(...) is deprecated and will be removed for the first release of Lasagne. Please use lasagne.layers.get_output(layer, ...) instead.
output_layer.get_output(X_batch, deterministic=True), y_batch)
/usr/local/anaconda/lib/python2.7/site-packages/nolearn/lasagne.py:286: UserWarning: layer.get_output(...) is deprecated and will be removed for the first release of Lasagne. Please use lasagne.layers.get_output(layer, ...) instead.
predict_proba = output_layer.get_output(X_batch, deterministic=True)
Traceback (most recent call last):
File "mnist_conv_autoencode.py", line 158, in <module>
ae.fit(X_train, X_out)
File "/usr/local/anaconda/lib/python2.7/site-packages/nolearn/lasagne.py", line 145, in fit
self.y_tensor_type,
File "/usr/local/anaconda/lib/python2.7/site-packages/nolearn/lasagne.py", line 295, in _create_iter_funcs
updates = update(loss_train, all_params, **update_params)
File "/usr/local/anaconda/lib/python2.7/site-packages/Lasagne-0.1.dev0-py2.7.egg/lasagne/updates.py", line 324, in nesterov_momentum
updates = sgd(loss_or_grads, params, learning_rate)
File "/usr/local/anaconda/lib/python2.7/site-packages/Lasagne-0.1.dev0-py2.7.egg/lasagne/updates.py", line 134, in sgd
grads = get_or_compute_grads(loss_or_grads, params)
File "/usr/local/anaconda/lib/python2.7/site-packages/Lasagne-0.1.dev0-py2.7.egg/lasagne/updates.py", line 110, in get_or_compute_grads
return theano.grad(loss_or_grads, params)
File "/usr/local/anaconda/lib/python2.7/site-packages/theano/gradient.py", line 432, in grad
raise TypeError("cost must be a scalar.")
TypeError: cost must be a scalar.
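Not the author, but the message itself is informative: theano.grad can only differentiate a scalar objective, so the per-pixel loss has to be reduced (e.g. with a mean) before the update rule sees it; with nolearn this reduction normally happens when regression=True wires up a squared-error objective. A toy NumPy sketch of the distinction (all names here are illustrative, not from the repo):

```python
import numpy as np

pred = np.random.rand(128, 784)
target = np.random.rand(128, 784)

elementwise = (pred - target) ** 2  # shape (128, 784): not a valid target for theano.grad
loss = elementwise.mean()           # scalar: the form theano.grad requires

print(elementwise.shape, loss.shape)  # (128, 784) ()
```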
Hi, sorry for the dumb question, but on your blog you talk about counting blobs. Which part of your code does that?
Greetings
Hi,
I tried your sample with (I think) the latest versions of each prerequisite, and got the following after the network fit correctly:
"""
(50000, 28, 28) (50000, 1, 28, 28)
2
Traceback (most recent call last):
File "conv_autoencoder2.py", line 195, in <module>
X_encoded = encode_input(encode_layer, X)
File "conv_autoencoder2.py", line 193, in encode_input
return get_output(encode_layer, inputs=X).eval()
File "/usr/local/lib/python2.7/dist-packages/theano/gof/graph.py", line 523, in eval
rval = self._fn_cache[inputs](*args)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
storage_map=getattr(self.fn, 'storage_map', None))
File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
outputs = self.fn()
ValueError: CorrMM received weight with wrong type.
Apply node that caused the error: CorrMM{(0, 0), (1, 1)}(TensorConstant{[[[[ 0. 0..0. 0.]]]]}, Subtensor{::, ::, ::int64, ::int64}.0)
Toposort index: 8
Inputs types: [TensorType(float32, (False, True, False, False)), TensorType(float64, 4D)]
Inputs shapes: [(50000, 1, 28, 28), (16, 1, 3, 3)]
Inputs strides: [(3136, 3136, 112, 4), (72, 72, -24, -8)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Elemwise{Composite{(i0 * (Abs((i1 + i2)) + i1 + i2))}}(TensorConstant{(1, 1, 1, 1) of 0.5}, CorrMM{(0, 0), (1, 1)}.0, InplaceDimShuffle{x,0,x,x}.0)]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
"""
Did I miss something somewhere?
Thanks.
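Not the author, but the "CorrMM received weight with wrong type" line pairs a float32 input with float64 weights (see "Inputs types" in the trace). Theano's convolution needs both operands in the same dtype, which usually means setting floatX = float32 (in .theanorc or via THEANO_FLAGS) so the weights are created as float32, or casting the arrays yourself. A dtype-only sketch in NumPy:

```python
import numpy as np

X = np.random.rand(500, 1, 28, 28).astype('float32')  # input batch, float32 as in the trace
W = np.random.rand(16, 1, 3, 3)                       # NumPy defaults to float64 -> the mismatch

print(X.dtype, W.dtype)   # float32 float64
W = W.astype(np.float32)  # cast, or run with THEANO_FLAGS=floatX=float32
print(W.dtype)            # float32
```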
Does this convolution use tied weights (for the encoder and decoder layers)? If so, is there a way to untie the weights? I'm curious about how to tie and untie the weights of the autoencoder and experiment with both.
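Not the author, but in the network snippet posted earlier in this thread the decoder is a separate Conv2DLayer with its own W, so the weights look untied there. Tying typically means reusing the encoder's weight matrix (transposed) in the decoder, while untying gives the decoder an independent parameter to learn. A dense-layer sketch of the difference (illustrative names, plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 40))        # encoder weights

x = rng.normal(size=(8, 784))
h = np.tanh(x @ W)                    # encode

x_tied = h @ W.T                      # tied decoder: reuse W transposed, no new parameters
W_dec = rng.normal(size=(40, 784))    # untied decoder: its own, independently learned matrix
x_untied = h @ W_dec

print(x_tied.shape, x_untied.shape)   # (8, 784) (8, 784)
```

In Lasagne terms, tying a DenseLayer pair is usually done by passing the encoder layer's W (transposed) as the decoder's W; convolutional tying additionally requires flipping the filters.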
Installing requirements as follows:
git clone https://github.com/mikesj-public/convolutional_autoencoder/
cd convolutional_autoencoder
conda env create convae anaconda
# say yes to prompt
source activate convae
cp requirements.txt all_requirements.txt
# REMOVE NOLEARN, MATPLOTLIB, AND LASAGNE FROM requirements.txt
pip install -r requirements.txt
pip install matplotlib
pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip
pip install --upgrade git+https://github.com/dnouri/nolearn.git@master#egg=nolearn==0.6a0.dev0
I assume this installs the versions stated in requirements.txt. Unfortunately, this leads to the following error on import, i.e. when executing cell 3 of the ipynb.
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-3-8d39d889452d> in <module>()
----> 1 from lasagne.layers import get_output, InputLayer, DenseLayer, Upscale2DLayer, ReshapeLayer
2 from lasagne.nonlinearities import rectify, leaky_rectify, tanh
3 from lasagne.updates import nesterov_momentum
4 from lasagne.objectives import categorical_crossentropy
5 from nolearn.lasagne import NeuralNet, BatchIterator, PrintLayerInfo
/home/james/anaconda2/envs/convae/lib/python2.7/site-packages/lasagne/__init__.py in <module>()
17 import theano.tensor.signal.pool
18 except ImportError: # pragma: no cover
---> 19 raise ImportError("Your Theano version is too old." + install_instr)
20 del install_instr
21 del theano
ImportError: Your Theano version is too old.
Please make sure you install a recent enough version of Theano. Note that a
simple 'pip install theano' will usually give you a version that is too old
for Lasagne. See the installation docs for more details:
http://lasagne.readthedocs.org/en/latest/user/installation.html#theano
Just to confirm that everything is as expected - here is the output of my conda list:
$ conda list
# packages in environment at /home/james/anaconda2/envs/convae:
#
alabaster 0.7.7 py27_0
anaconda 4.0.0 np110py27_0
anaconda-client 1.4.0 py27_0
anaconda-navigator 1.1.0 py27_0
argcomplete 1.0.0 py27_1
astropy 1.1.2 np110py27_0
babel 2.2.0 py27_0
backports-abc 0.4 <pip>
backports.ssl-match-hostname 3.4.0.2 <pip>
backports_abc 0.4 py27_0
beautifulsoup4 4.4.1 py27_0
bitarray 0.8.1 py27_0
blaze 0.9.1 py27_0
bokeh 0.11.1 py27_0
boto 2.39.0 py27_0
bottleneck 1.0.0 np110py27_0
cairo 1.12.18 6
cdecimal 2.3 py27_0
cffi 1.5.2 py27_0
chest 0.2.3 py27_0
cloudpickle 0.1.1 py27_0
clyent 1.2.1 py27_0
colorama 0.3.7 py27_0
conda-manager 0.3.1 py27_0
configobj 5.0.6 py27_0
cryptography 1.3 py27_0
curl 7.45.0 0
cycler 0.10.0 py27_0
cython 0.23.4 py27_0
cytoolz 0.7.5 py27_0
dask 0.8.1 py27_0
datashape 0.5.1 py27_0
decorator 4.0.9 py27_0
dill 0.2.4 py27_0
docutils 0.12 py27_0
dynd 0.7.3.dev1 <pip>
dynd-python 0.7.2 py27_0
enum34 1.1.2 py27_0
et-xmlfile 1.0.1 <pip>
et_xmlfile 1.0.1 py27_0
fastcache 1.0.2 py27_0
flask 0.10.1 py27_1
flask-cors 2.1.2 py27_0
fontconfig 2.11.1 5
freetype 2.5.5 0
funcsigs 0.4 py27_0
futures 3.0.3 py27_0
gdbn 0.1 <pip>
gevent 1.1.0 py27_0
gnumpy 0.2 <pip>
greenlet 0.4.9 py27_0
grin 1.2.1 py27_1
h5py 2.5.0 np110py27_4
hdf5 1.8.15.1 2
heapdict 1.0.0 py27_0
idna 2.0 py27_0
ipaddress 1.0.14 py27_0
ipykernel 4.3.1 py27_0
ipython 4.1.2 py27_1
ipython-genutils 0.1.0 <pip>
ipython_genutils 0.1.0 py27_0
ipywidgets 4.1.1 py27_0
itsdangerous 0.24 py27_0
jbig 2.1 0
jdcal 1.2 py27_0
jedi 0.9.0 py27_0
jinja2 2.8 py27_0
joblib 0.9.4 <pip>
jpeg 8d 0
jsonschema 2.4.0 py27_0
jupyter 1.0.0 py27_2
jupyter-client 4.2.2 <pip>
jupyter-console 4.1.1 <pip>
jupyter-core 4.1.0 <pip>
jupyter_client 4.2.2 py27_0
jupyter_console 4.1.1 py27_0
jupyter_core 4.1.0 py27_0
lasagne 0.2.dev1 <pip>
libdynd 0.7.2 0
libffi 3.0.13 0
libgfortran 3.0 0
libpng 1.6.17 0
libsodium 1.0.3 0
libtiff 4.0.6 1
libxml2 2.9.2 0
libxslt 1.1.28 0
llvmlite 0.9.0 py27_0
locket 0.2.0 py27_0
lxml 3.6.0 py27_0
markupsafe 0.23 py27_0
matplotlib 1.5.1 np110py27_0
mistune 0.7.2 py27_0
mkl 11.3.1 0
mkl-service 1.1.2 py27_0
mock 1.3.0 <pip>
mpmath 0.19 py27_0
multipledispatch 0.4.8 py27_0
nbconvert 4.1.0 py27_0
nbformat 4.0.1 py27_0
networkx 1.11 py27_0
nltk 3.2 py27_0
nolearn 0.6a0.dev0 <pip>
nose 1.3.7 py27_0
notebook 4.1.0 py27_1
numba 0.24.0 np110py27_0
numexpr 2.5 np110py27_0
numpy 1.10.4 py27_1
odo 0.4.2 py27_0
openpyxl 2.3.2 py27_0
openssl 1.0.2g 0
pandas 0.18.0 np110py27_0
partd 0.3.2 py27_1
patchelf 0.8 0
path.py 8.1.2 py27_1
patsy 0.4.0 np110py27_0
pbr 1.8.0 <pip>
pep8 1.7.0 py27_0
pexpect 4.0.1 py27_0
pickleshare 0.5 py27_0
pillow 3.1.1 py27_0
pip 8.1.1 py27_1
pixman 0.32.6 0
ply 3.8 py27_0
psutil 4.1.0 py27_0
ptyprocess 0.5 py27_0
py 1.4.31 py27_0
pyasn1 0.1.9 py27_0
pycairo 1.10.0 py27_0
pycosat 0.6.1 py27_0
pycparser 2.14 py27_0
pycrypto 2.6.1 py27_0
pycurl 7.19.5.3 py27_0
pyflakes 1.1.0 py27_0
pygments 2.1.1 py27_0
pyopenssl 0.15.1 py27_2
pyparsing 2.0.3 py27_0
pyqt 4.11.4 py27_1
pytables 3.2.2 np110py27_1
pytest 2.8.5 py27_0
python 2.7.11 0
python-dateutil 2.5.1 py27_0
pytz 2016.2 py27_0
pyyaml 3.11 py27_1
pyzmq 15.2.0 py27_0
qt 4.8.7 1
qtawesome 0.3.2 py27_0
qtconsole 4.2.0 py27_0
qtpy 1.0 py27_0
readline 6.2 2
redis 2.6.9 0
redis-py 2.10.3 py27_0
requests 2.9.1 py27_0
rope 0.9.4 py27_1
scikit-image 0.12.3 np110py27_0
scikit-learn 0.17.1 np110py27_0
scipy 0.17.0 np110py27_2
setuptools 20.3 py27_0
simplegeneric 0.8.1 py27_0
singledispatch 3.4.0.3 py27_0
sip 4.16.9 py27_0
six 1.10.0 py27_0
snowballstemmer 1.2.1 py27_0
sockjs-tornado 1.0.1 py27_0
sphinx 1.3.5 py27_0
sphinx-rtd-theme 0.1.9 <pip>
sphinx_rtd_theme 0.1.9 py27_0
spyder 2.3.8 py27_1
sqlalchemy 1.0.12 py27_0
sqlite 3.9.2 0
ssl_match_hostname 3.4.0.2 py27_0
statsmodels 0.6.1 np110py27_0
sympy 1.0 py27_0
tables 3.2.2 <pip>
tabulate 0.7.5 <pip>
terminado 0.5 py27_1
theano 0.8.0 <pip>
tk 8.5.18 0
toolz 0.7.4 py27_0
tornado 4.3 py27_0
traitlets 4.2.1 py27_0
unicodecsv 0.14.1 py27_0
util-linux 2.21 0
werkzeug 0.11.4 py27_0
wheel 0.29.0 py27_0
xlrd 0.9.4 py27_0
xlsxwriter 0.8.4 py27_0
xlwt 1.0.0 py27_0
xz 5.0.5 1
yaml 0.1.6 0
zeromq 4.1.3 0
zlib 1.2.8 0