jacobgil / keras-dcgan
Keras implementation of Deep Convolutional Generative Adversarial Networks
I kept getting the error "Python error: 'int' object has no attribute 'shape'", pointing to the following line in def train in the dcgan.py file:
g_loss = d_on_g.train_on_batch(noise, [1] * BATCH_SIZE)
Solution:
Changing the line to the following worked well:
g_loss = d_on_g.train_on_batch(noise, np.array([1] * BATCH_SIZE))
Correct me if this is wrong!
Hope this is of some help!
Thanks,
Mritula
Is there a reason you sampled from a uniform distribution (as opposed to a normal distribution)? Just wondering whether this was intentional or not!
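For reference, swapping the sampling distribution is a one-line change. A minimal sketch, assuming NumPy imported as np and an illustrative BATCH_SIZE (not the repo's exact code):
import numpy as np
BATCH_SIZE = 128  # illustrative value
noise_uniform = np.random.uniform(-1, 1, size=(BATCH_SIZE, 100))  # what the code samples now
noise_normal = np.random.normal(0, 1, size=(BATCH_SIZE, 100))     # normal-distribution alternative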
OSError: Unable to open file (unable to open file: name = 'generator'
so I can't find the weights file referenced in your code at line 121: g.load_weights('generator')
Keras 2.1.3
Python 2.7
Theano 1.0.1
root@localhost:~/zya/keras-dcgan# python dcgan.py --mode train
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using Theano backend.
dcgan.py:19: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(units=1024, input_dim=100)`
model.add(Dense(input_dim=100, output_dim=1024))
('Epoch is', 0)
('Number of batches', 468)
Traceback (most recent call last):
File "dcgan.py", line 160, in <module>
train(BATCH_SIZE=args.batch_size)
File "dcgan.py", line 106, in train
d_loss = d.train_on_batch(X, y)
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 1069, in train_on_batch
class_weight=class_weight)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1843, in train_on_batch
check_batch_axis=True)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1430, in _standardize_user_data
exception_prefix='target')
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 70, in _standardize_input_data
data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data]
AttributeError: 'int' object has no attribute 'ndim'
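This 'ndim' error looks like the same issue as the 'shape' error reported above: recent Keras versions expect the targets passed to train_on_batch to be NumPy arrays rather than plain Python lists. A sketch of the workaround inside train(), assuming numpy is already imported as np in dcgan.py:
y = np.array([1] * BATCH_SIZE + [0] * BATCH_SIZE)   # discriminator targets as an array instead of a list
d_loss = d.train_on_batch(X, y)
g_loss = d_on_g.train_on_batch(noise, np.array([1] * BATCH_SIZE))  # same fix for the generator step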
When I ran this, after 4 epochs G appeared to be getting farther and farther from converging (currently g_loss ~= 3), while D did appear to be converging slowly. Can you post a TensorBoard screenshot of the convergence behavior?
I get an AssertionError when compiling the model; I haven't changed anything in the code.
Do you know what the problem might be?
Traceback (most recent call last):
File "dcgan.py", line 155, in <module>
train(path = args.path, BATCH_SIZE = args.batch_size)
File "dcgan.py", line 87, in train
discriminator_on_generator.compile(loss='binary_crossentropy', optimizer=adam)
File "/usr/local/lib/python3.4/dist-packages/keras/models.py", line 467, in compile
self.y_train = self.get_output(train=True)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/containers.py", line 128, in get_output
return self.layers[-1].get_output(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/containers.py", line 128, in get_output
return self.layers[-1].get_output(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 679, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 970, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 842, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/advanced_activations.py", line 28, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/normalization.py", line 71, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/convolutional.py", line 312, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/advanced_activations.py", line 28, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/normalization.py", line 71, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/convolutional.py", line 312, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/advanced_activations.py", line 28, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/normalization.py", line 71, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/convolutional.py", line 312, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/normalization.py", line 71, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/advanced_activations.py", line 28, in get_output
X = self.get_input(train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/core.py", line 175, in get_input
previous_output = self.previous.get_output(train=train)
File "/usr/local/lib/python3.4/dist-packages/keras/layers/convolutional.py", line 317, in get_output
filter_shape=self.W_shape)
File "/usr/local/lib/python3.4/dist-packages/keras/backend/theano_backend.py", line 596, in conv2d
assert(strides == (1, 1))
AssertionError
I trained the model on various datasets with more than 20k images, but even after several epochs I'm not getting the desired results.
Can I know the dataset on which this model has been trained and tested?
Thank you.
The related papers all say they use Conv2DTranspose.
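For comparison, a generator block built with strided transposed convolutions (as in the paper) rather than UpSampling2D + Conv2D might look like this. This is only a sketch with illustrative sizes and channels-last ordering, not the repo's architecture:
from keras.models import Sequential
from keras.layers import Conv2DTranspose, BatchNormalization, Activation

model = Sequential()
# fractionally-strided convolution: upsamples 7x7x128 feature maps to 14x14x64
model.add(Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', input_shape=(7, 7, 128)))
model.add(BatchNormalization())
model.add(Activation('relu'))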
The line discriminator.trainable = False
does not stop the discriminator from learning. Replace that type of line with a call to the following function:
def make_trainable(net, val):
net.trainable = val
for l in net.layers:
l.trainable = val
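Note that Keras bakes the trainable flags in at compile time, so (as far as I can tell) the helper only has an effect if the models are recompiled after calling it. A rough sketch of the intended usage, assuming the d and d_on_g models from dcgan.py and whatever optimizers (d_optim, g_optim) you already use:
make_trainable(d, True)
d.compile(loss='binary_crossentropy', optimizer=d_optim)       # discriminator trains its own weights
make_trainable(d, False)
d_on_g.compile(loss='binary_crossentropy', optimizer=g_optim)  # combined model only updates the generator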
Hi,
Is it possible to train on our own dataset? If yes, then how? I am using the command python dcgan.py --mode train --path ~/images --batch_size 128
after placing images in the /images folder, but it throws an error that there is no --path argument. I also checked your Python file; it does not include a --path argument.
To me it seems that the architecture in the referenced paper is quite different from the one that is implemented right now.
In particular:
BTW: I really like this initiative (I'm looking to use adversarial nets within Keras quite soon).
In def train(BATCH_SIZE), there is the line
discriminator_on_generator = generator_containing_discriminator(generator, discriminator)
What is discriminator_on_generator? Is it a model?
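Yes, it is a Keras model: generator_containing_discriminator stacks the generator and a frozen discriminator into one Sequential model, so training it on noise with label 1 pushes gradients through the discriminator into the generator. Roughly (a from-memory sketch, not a verbatim copy of the repo's function):
def generator_containing_discriminator(g, d):
    model = Sequential()
    model.add(g)          # noise -> generated image
    d.trainable = False   # freeze the discriminator inside this combined model
    model.add(d)          # generated image -> probability of being real
    return model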
When I try to run: python dcgan.py --mode train --batch_size 100
I get the following:
Using TensorFlow backend.
dcgan.py:41: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (5, 5), input_shape=(1, 28, 28..., padding="same")`
input_shape=(1, 28, 28)))
Traceback (most recent call last):
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 671, in _call_cpp_shape_fn_impl
input_tensors_as_shapes, status)
File "/home/marija/anaconda3/envs/tensorflow-gpu-3.5/lib/python3.5/contextlib.py", line 66, in __exit__
next(self.gen)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_1/MaxPool' (op: 'MaxPool') with input shapes: [?,1,28,64].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "dcgan.py", line 169, in <module>
train(BATCH_SIZE=args.batch_size)
File "dcgan.py", line 82, in train
discriminator = discriminator_model()
File "dcgan.py", line 43, in discriminator_model
model.add(MaxPooling2D(pool_size=(2, 2)))
File "/home/marija/.local/lib/python3.5/site-packages/keras/models.py", line 466, in add
output_tensor = layer(self.outputs[0])
File "/home/marija/.local/lib/python3.5/site-packages/keras/engine/topology.py", line 585, in __call__
output = self.call(inputs, **kwargs)
File "/home/marija/.local/lib/python3.5/site-packages/keras/layers/pooling.py", line 154, in call
data_format=self.data_format)
File "/home/marija/.local/lib/python3.5/site-packages/keras/layers/pooling.py", line 217, in _pooling_function
pool_mode='max')
File "/home/marija/.local/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 3245, in pool2d
x = tf.nn.max_pool(x, pool_size, strides, padding=padding)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py", line 1821, in max_pool
name=name)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1638, in _max_pool
data_format=data_format, name=name)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2338, in create_op
set_shapes_for_outputs(ret)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1719, in set_shapes_for_outputs
shapes = shape_func(op)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1669, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
debug_python_shape_fn, require_shape_fn)
File "/home/marija/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py", line 676, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_1/MaxPool' (op: 'MaxPool') with input shapes: [?,1,28,64].
Any ideas what is going wrong here?
I am using tensorflow backend and updated the keras.json file as -
{
"floatx": "float32",
"epsilon": 1e-07,
"backend": "tensorflow",
"image_data_format": "channels_last"
"image_dim_ordering": "th"
}
This is the error that I am getting:
Using TensorFlow backend.
test2.py:31: UserWarning: Update your `Dense` call to the Keras 2 API: `Dense(input_dim=100, units=1024)`
model.add(Dense(input_dim=100, output_dim=1024))
Traceback (most recent call last):
File "test2.py", line 177, in <module>
train(BATCH_SIZE=args.batch_size)
File "test2.py", line 102, in train
d_on_g = generator_containing_discriminator(g, d)
File "test2.py", line 71, in generator_containing_discriminator
model.add(g)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py", line 467, in add
layer(x)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 619, in __call__
output = self.call(inputs, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py", line 549, in call
return self.model.call(inputs, mask)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2085, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2236, in run_internal_graph
output_tensors = _to_list(layer.call(computed_tensor, **kwargs))
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\normalization.py", line 193, in call
self.momentum),
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 1004, in moving_average_up
date
x, value, momentum, zero_debias=True)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\training\moving_averages.py", line 70, in assign_mo
ving_average
update_delta = _zero_debias(variable, value, decay)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\training\moving_averages.py", line 180, in _zero_de
bias
"biased", initializer=biased_initializer, trainable=False)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1065, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 962, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 367, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 352, in _true_getter
use_resource=use_resource)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 664, in _get_single_va
riable
name, "".join(traceback.format_list(tb))))
ValueError: Variable batch_normalization_1/moving_mean/biased already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)
Any idea what I might be doing wrong?
When I type python dcgan.py --mode train --batch_size 32, I see this:
File "dcgan.py", line 167, in
train(BATCH_SIZE=args.batch_size)
File "dcgan.py", line 80, in train
discriminator = discriminator_model()
File "dcgan.py", line 41, in discriminator_model
model.add(MaxPooling2D(pool_size=(2, 2)))
File "/usr/local/lib/python2.7/site-packages/keras/models.py", line 312, in add
output_tensor = layer(self.outputs[0])
File "/usr/local/lib/python2.7/site-packages/keras/engine/topology.py", line 514, in call
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/site-packages/keras/engine/topology.py", line 572, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/site-packages/keras/engine/topology.py", line 149, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/usr/local/lib/python2.7/site-packages/keras/layers/pooling.py", line 162, in call
dim_ordering=self.dim_ordering)
File "/usr/local/lib/python2.7/site-packages/keras/layers/pooling.py", line 212, in _pooling_function
border_mode, dim_ordering, pool_mode='max')
File "/usr/local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1761, in pool2d
x = tf.nn.max_pool(x, pool_size, strides, padding=padding)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 850, in max_pool
name=name)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1440, in _max_pool
data_format=data_format, name=name)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2382, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1783, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 596, in call_cpp_shape_fn
raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1
I wonder how to fix it.
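These 'negative dimension size' errors typically mean the model, written for Theano-style channels-first input (input_shape=(1, 28, 28)), is running on a TensorFlow backend configured for channels_last, so MaxPooling2D ends up pooling over a dimension of size 1. One possible workaround (a sketch assuming the Keras 2 backend API; alternatively set "image_data_format": "channels_first" in keras.json):
from keras import backend as K
# make the backend treat input_shape=(1, 28, 28) as (channels, rows, cols)
K.set_image_data_format('channels_first')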
So, with that, I get the following error:
theano.gof.fg.MissingInputError: ("An input of the graph, used to compute DimShuffle{0,1,2,x}(convolution1d_input_1), was not provided and not given a value.Use the Theano flag exception_verbosity='high',for more information on this error.", convolution1d_input_1)
I dug around and found that when training the whole thing, the discriminator had .trainable=True. So, even though it was compiled with False, when you turn it on afterwards it becomes trainable, and then the Theano graph is not well defined.
I was able to fix this by setting discriminator.trainable=True and recompiling it before its training, and afterwards setting it to False and recompiling the discriminator_on_generator.
Now I need to understand why the discriminator converges very fast while the generator diverges. Any ideas on this? I am not using images, but midi files expressed as vector sequences.
Hello.
In dcgan.py, generator_model() uses tanh not just for the last layer but for every layer.
The original paper says it's recommended to use ReLU for every layer except the last one.
The same difference exists in discriminator_model as well.
Do you have a specific reason for building these differently?
Thank you for sharing your code, though.
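For reference, the activation scheme the paper recommends would look roughly like this in the generator. This is only a sketch; the layer sizes are illustrative (channels-last) and not the repo's exact architecture:
from keras.models import Sequential
from keras.layers import Dense, Reshape, UpSampling2D, Conv2D, BatchNormalization, Activation

model = Sequential()
model.add(Dense(128 * 7 * 7, input_dim=100))
model.add(BatchNormalization())
model.add(Activation('relu'))              # ReLU in hidden layers
model.add(Reshape((7, 7, 128)))
model.add(UpSampling2D(size=(2, 2)))
model.add(Conv2D(64, (5, 5), padding='same'))
model.add(Activation('relu'))              # ReLU again
model.add(UpSampling2D(size=(2, 2)))
model.add(Conv2D(1, (5, 5), padding='same'))
model.add(Activation('tanh'))              # tanh only on the output layer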
Hello,
I tried to launch this project locally. I set my Keras to use Theano, Python 2.7; it's Ubuntu.
Traceback (most recent call last):
File "dcgun.py", line 167, in
train(BATCH_SIZE=args.batch_size)
File "dcgun.py", line 80, in train
discriminator = discriminator_model()
File "dcgun.py", line 46, in discriminator_model
model.add(Dense(1024))
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 312, in add
output_tensor = layer(self.outputs[0])
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 487, in call
self.build(input_shapes[0])
File "/usr/local/lib/python2.7/dist-packages/keras/layers/core.py", line 695, in build
name='{}_W'.format(self.name))
File "/usr/local/lib/python2.7/dist-packages/keras/initializations.py", line 59, in glorot_uniform
return uniform(shape, s, name=name)
File "/usr/local/lib/python2.7/dist-packages/keras/initializations.py", line 32, in uniform
return K.random_uniform_variable(shape, -scale, scale, name=name)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 140, in random_uniform_variable
return variable(np.random.uniform(low=low, high=high, size=shape),
File "mtrand.pyx", line 1565, in mtrand.RandomState.uniform (numpy/random/mtrand/mtrand.c:17319)
OverflowError: Range exceeds valid bounds
Hi,
I kept getting the error below when trying to execute the dcgan.py file in Colab:
ImportError: cannot import name 'BatchNormalization' from 'keras.layers.normalization'
To overcome this, replace the initial imports (lines 1 to 10) with their tensorflow.keras equivalents as listed below (collected into a single block after this list):
For line no 1: from tensorflow.keras.models import Sequential
For lines no 2:8, replace them with: from tensorflow.keras.layers import (BatchNormalization, SeparableConv2D, MaxPooling2D, Activation, Flatten, Dropout, Dense, UpSampling2D, Reshape, Conv2D)
For line no 9, replace with: from tensorflow.keras.optimizers import SGD
For line no 10, replace with: from tensorflow.keras.datasets import mnist
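Collected into a single block, the replacement imports would look like this (a sketch against the tensorflow.keras API; adjust the layer list to match what dcgan.py actually uses):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
    BatchNormalization, SeparableConv2D, MaxPooling2D, Activation,
    Flatten, Dropout, Dense, UpSampling2D, Reshape, Conv2D
)
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.datasets import mnist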
Hope this helps!
Regards,
Mritula
Since we're training discriminator_on_generator with noise, shouldn't the corresponding label data be a string of 0s instead of 1s?
for i in range(BATCH_SIZE):
    noise[i, :] = np.random.uniform(-1, 1, 100)
g_loss = discriminator_on_generator.train_on_batch(noise, [1] * BATCH_SIZE)  # shouldn't this be [0] * BATCH_SIZE?
thanks!
Hi, what's the use of 127.5 in this line of code:
X_train = (X_train.astype(np.float32) - 127.5)/127.5
Is it to scale the input data? If yes, then how do you calculate 127.5? Is it just to match the MNIST data?
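For what it's worth, 127.5 is simply half of 255, the maximum 8-bit pixel value, so the transform maps pixels from [0, 255] to [-1, 1] to match the generator's tanh output range; it is not specific to MNIST. A tiny sketch of the idea:
import numpy as np
pixels = np.array([0, 127.5, 255], dtype=np.float32)
scaled = (pixels - 127.5) / 127.5      # -> [-1.0, 0.0, 1.0]
restored = scaled * 127.5 + 127.5      # back to [0, 255] when saving generated images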
In the GAN paper, the loss function has the form of min max V(G, D), while in your code the loss is 'binary_crossentropy'. I'm a little confused about this; could you please explain it a bit?
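One way to see the connection (my reading, not necessarily the author's): with targets 1 for real and 0 for fake, binary cross-entropy on the discriminator's output reproduces the two terms of the minimax value function, and training the generator against target 1 gives the usual non-saturating variant:
L_D = -E_x[log D(x)] - E_z[log(1 - D(G(z)))]   # minimizing L_D is maximizing V(D, G)
L_G = -E_z[log D(G(z))]                        # non-saturating generator loss used in practice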
It seems you can only change a model's trainable flag before model.compile; once a model is compiled, a later change of trainable will not take effect. So why do you change model.trainable twice during train:
X = np.concatenate((image_batch, generated_images))
y = [1] * BATCH_SIZE + [0] * BATCH_SIZE
d_loss = d.train_on_batch(X, y)
print("batch %d d_loss : %f" % (index, d_loss))
noise = np.random.uniform(-1, 1, (BATCH_SIZE, 100))
d.trainable = False  # 'here'
g_loss = d_on_g.train_on_batch(noise, [1] * BATCH_SIZE)
d.trainable = True  # 'here'
print("batch %d g_loss : %f" % (index, g_loss))
python dcgan.py --mode train --batch_size
I am loading the MNIST dataset from a local pickle file, and this error is occurring. Debugging now, and will update as I learn more.
//anaconda/envs/py35/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
return f(*args, **kwds)
//anaconda/envs/py35/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
//anaconda/envs/py35/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
//anaconda/envs/py35/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
Traceback (most recent call last):
File "DCGAN.py", line 167, in
train(BATCH_SIZE=args.batch_size)
File "DCGAN.py", line 91, in train
g = generator_model()
File "DCGAN.py", line 22, in generator_model
model.add(Dense(input_dim=100, output_dim=1024))
could it be my TF version? It shouldn't be...
Hi,
In line 96 we define noise, which is used to generate images and train the discriminator d.
In line 108 we define new noise again, which is used to train the generator d_on_g.
Could you please explain why we are not using the noise defined in line 96? Is it because, if we use the same starting point to train d and d_on_g, it may overfit?
Thanks
I cannot see a license on this repository. Please choose an appropriate license if it's okay for others to derive from and use your wonderful work [1].
Thanks.
[1]. https://help.github.com/articles/open-source-licensing/#what-happens-if-i-dont-choose-a-license
@jacobgil @jusjusjus @jacobsagivtech @laoluzi How can I use this code with another dataset or my own dataset, and how can I access the discriminator network as a feature extractor for images?