
markovmodel / deeptime


Deep learning meets molecular dynamics.

License: GNU Lesser General Public License v3.0

Languages: Python 41.87%, Jupyter Notebook 58.13%

Topics: pytorch, tensorflow, dimension-reduction, markov-model, data-analysis, machine-learning, autoencoder, time-series, computational-biology, computational-chemistry, python, deep-learning


deeptime's People

Contributors

clonker, cwehmeyer, pasqualil


deeptime's Issues

test_toymodel.py fails

When running python setup.py test, I receive failures for the toy models.
The installation works fine, but I am curious whether these toy models are deprecated or from an older version, or whether this would cause errors when using the module in my environment.

I have attached the output of python setup.py test &> test.log
test.log

What is the evaluation significance of model training?

I saw that the model is compiled with metrics = [vamp.metric_VAMP, vamp.metric_VAMP2], and then I saw the training visualization as follows:
[attached: training-visualization plot]

But I don't know how to interpret this graph: what do the X and Y axes mean, and how do I judge whether the curve is good or bad?
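For orientation, a minimal plotting sketch (assuming the hist object returned by model.fit in the examples and the metric names used elsewhere in this repository); the x axis is the training epoch and the y axis the VAMP score:

import matplotlib.pyplot as plt

# hist is assumed to come from model.fit(...) with the VAMP metrics compiled in.
plt.plot(hist.history['metric_VAMP'], label='training VAMP')
plt.plot(hist.history['val_metric_VAMP'], label='validation VAMP')
plt.xlabel('epoch')
plt.ylabel('VAMP score')
plt.legend()
plt.show()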

Self-adjoint eigen decomposition was not successful. The input might not be valid.

I'm trying to train a VAMPnet on one of my MD protein-ligand datasets. I've tried using as input both pairwise distance features and Cartesian coordinates of alpha carbons from PBC-corrected trajectories. I'm currently using TensorFlow 1.8 and these hyperparameters:

tau = 10
batch_size = 1000
training_percentage = 75
network_depth = 6
layer_width = 16
learning_rate = 5e-4
output_size = 8
nb_epoch = 10
epsilon = 1e-5

I've also attached a notebook that I've edited a bit from the Alanine_dipeptide_multiple_files notebook in the examples directory.
I've been able to build traditional MSMs and MEMMs with the data, so it should be valid.
VAMPnets.zip
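As an aside, the epsilon hyperparameter above hints at the usual numerical workaround for this class of error: regularizing a covariance matrix before the self-adjoint eigendecomposition. A minimal sketch (an illustration only, not the vampnet internals):

import numpy as np

# Symmetrize and shift the spectrum away from zero before eigendecomposition,
# so that LAPACK's heevd does not fail on a near-singular covariance matrix.
def robust_eigh(cov, epsilon=1e-5):
    cov = 0.5 * (cov + cov.T)                  # enforce exact symmetry
    cov = cov + epsilon * np.eye(cov.shape[0]) # add epsilon to the diagonal
    return np.linalg.eigh(cov)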

example

in vampnet/examples/Alanine_dipeptide.ipynb
I think mdshare.load has to be replaced with mdshare.fetch.

traj_whole, dihedral = vamp_data_generator.get_alanine_data()
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-5-75612c26cdc4> in <module>()
----> 1 traj_whole, dihedral = vamp_data_generator.get_alanine_data()
      2 
      3 traj_data_points, input_size = traj_whole.shape

xxx/lib/python3.6/site-packages/vampnet-0.1.4.dev2+gf2c6dd8.d20180829-py3.6.egg/vampnet/data_generator.py in get_alanine_data(input_type, return_dihedrals, number_files)
    183     elif input_type == 'coordinates':
    184 
--> 185         local_filename = mdshare.load('alanine-dipeptide-3x250ns-heavy-atom-positions.npz')
    186 
    187         traj_whole = np.load(local_filename)['arr_0']

xxx/lib/python3.6/site-packages/mdshare/__init__.py in load(*args, **kwargs)
     51 
     52 def load(*args, **kwargs):
---> 53     raise NotImplementedError('use fetch')

NotImplementedError: use fetch
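The suggested one-line fix, using the fetch function that the error message points to:

import mdshare

# mdshare.load was removed in favour of fetch; fetch downloads the file (if
# necessary) and returns the local path, just as load used to.
local_filename = mdshare.fetch('alanine-dipeptide-3x250ns-heavy-atom-positions.npz')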

What is the meaning of labels?

I see your data_generator function, but labels = np.empty((input_data[0].shape[0], 2 * output_size)).astype('float32') is not a real label array. Is there any point in defining labels like this? So what is my model learning?
Can you give me a more specific explanation? Thank you very much.
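A hedged reading of that pattern, not confirmed by the authors: Keras requires a label array for fitting, but the use of np.empty suggests the VAMP losses read only the network outputs, so the labels merely need the right shape. A minimal sketch:

import numpy as np

# Dummy labels: allocated but never initialized, since (apparently) only their
# shape matters to the loss; 2 * output_size matches the two concatenated lobes.
n_frames, output_size = 1000, 8
labels = np.empty((n_frames, 2 * output_size)).astype('float32')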

vamp._loss_VAMP_sym error

When using vamp._loss_VAMP_sym, the line

hist = model.fit_generator(generator = vamp_data_loader.build_generator_on_source(train_data_source,
                                                      batch_size,
                                                      tau,
                                                      output_size),
                           steps_per_epoch = steps_per_train_epoch,
                           epochs = nb_epoch,
                           verbose = 0,
                           validation_data = vamp_data_loader.build_generator_on_source(valid_data_source,
                                                            batch_size,
                                                            tau,
                                                            output_size),
                           validation_steps = steps_per_valid_epoch,
                           shuffle = True
                          )

I get a fatal error:

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-54-480bb4a81ec5> in <module>()
     28                                                                     output_size),
     29                                    validation_steps = steps_per_valid_epoch,
---> 30                                    shuffle = True
     31                                   )
     32 

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1413             use_multiprocessing=use_multiprocessing,
   1414             shuffle=shuffle,
-> 1415             initial_epoch=initial_epoch)
   1416 
   1417     @interfaces.legacy_generator_methods_support

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
    228                             val_enqueuer_gen,
    229                             validation_steps,
--> 230                             workers=0)
    231                     else:
    232                         # No need for try/except because

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/engine/training.py in evaluate_generator(self, generator, steps, max_queue_size, workers, use_multiprocessing, verbose)
   1467             workers=workers,
   1468             use_multiprocessing=use_multiprocessing,
-> 1469             verbose=verbose)
   1470 
   1471     @interfaces.legacy_generator_methods_support

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/engine/training_generator.py in evaluate_generator(model, generator, steps, max_queue_size, workers, use_multiprocessing, verbose)
    341                                  'or (x, y). Found: ' +
    342                                  str(generator_output))
--> 343             outs = model.test_on_batch(x, y, sample_weight=sample_weight)
    344             outs = to_list(outs)
    345             outs_per_batch.append(outs)

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/engine/training.py in test_on_batch(self, x, y, sample_weight)
   1252             ins = x + y + sample_weights
   1253         self._make_test_function()
-> 1254         outputs = self.test_function(ins)
   1255         return unpack_singleton(outputs)
   1256 

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
   2664                 return self._legacy_call(inputs)
   2665 
-> 2666             return self._call(inputs)
   2667         else:
   2668             if py_any(is_tensor(x) for x in inputs):

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in _call(self, inputs)
   2634                                 symbol_vals,
   2635                                 session)
-> 2636         fetched = self._callable_fn(*array_vals)
   2637         return fetched[:len(self.outputs)]
   2638 

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
   1380           ret = tf_session.TF_SessionRunCallable(
   1381               self._session._session, self._handle, args, status,
-> 1382               run_metadata_ptr)
   1383         if run_metadata:
   1384           proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/scratch1/eh22/conda/envs/extasy13/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
    517             None, None,
    518             compat.as_text(c_api.TF_Message(self.status.status)),
--> 519             c_api.TF_GetCode(self.status.status))
    520     # Delete the underlying status object from memory otherwise it stays alive
    521     # as there is a reference to status from this from the traceback due to

InvalidArgumentError: Got info = 2 for batch index 0, expected info = 0. Debug_info = heevd
   [[Node: metrics_4/metric_VAMP/SelfAdjointEigV2 = SelfAdjointEigV2[T=DT_FLOAT, compute_v=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](loss_4/concatenate_1_loss/mul_3)]]
   [[Node: loss_4/mul/_603 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_450_loss_4/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

vampnet ITS weird dip

in vampnet/examples/Alanine_dipeptide.ipynb, the implied timescales (ITS)

its = vamp.get_its(pred_ord, lag)
vamp.plot_its(its, lag)

give a weird dip of the third timescale around lag time = 80. What is causing it?
[attached: implied-timescales plot]

training very slow on GPU

Hi, I am trying to reproduce your results in Alanine_dipeptide_multiple_files on a single NVIDIA GeForce GTX 1080 Ti GPU, and it took ~5 h to finish all 10 attempts. I was using tensorflow-gpu v1.9.0, CUDA 9.0 and cuDNN 7.0. As a comparison, I also ran the Jupyter notebook on my laptop CPU, and it was faster than the GPU (~3 h, but still very slow!). In the Nature Comm. paper, you mention that, depending on the system, each run takes between 20 s and 180 s. Since I didn't change the code, I am wondering why there's such a big discrepancy in speed compared to the paper. Do you have any insight into why my training is so slow? Thanks!

(1.5 * r - 2.0) * rvec / rnorm in line 84 of vampnet/vampnet/data_generator.py

Isn't the gradient (1.5 * r - 2.0) * rvec / rnorm in line 84 of vampnet/vampnet/data_generator.py supposed to be (1.5 * r**2 - 2.0*r) * rvec / rnorm, according to the potential energy function of the protein-folding model, U(r) = 0.5(r-3)^3 - (r-3)^2 for r > 3, from the paper "VAMPnets for deep learning of molecular kinetics"?
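A quick chain-rule check supports this, assuming r in the code denotes the shifted distance rnorm - 3, so that U = 0.5*r**3 - r**2:

import sympy as sp

# Differentiate U(r) = 0.5*r**3 - r**2 with respect to the shifted radius r.
r = sp.symbols('r', positive=True)
U = sp.Rational(1, 2) * r**3 - r**2
print(sp.diff(U, r))  # 3*r**2/2 - 2*r, i.e. (1.5*r**2 - 2.0*r); the Cartesian
                      # gradient then carries the extra factor rvec/rnorm.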

memory warning

in the vampnet examples

hist = model.fit([X1_train, X2_train], Y_train ,
                         batch_size=batch_size,
                         epochs=nb_epoch,
                         validation_data=([X1_vali, X2_vali], Y_vali ),
                         verbose=0)

leads to

tensorflow/python/ops/gradients_impl.py:108: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "

It's just a warning, but it might affect performance.

issue with clear_session() in ALA DP JUPYTER

Working through Alanine_dipeptide.ipynb

AT "Run several model iterations...." encountered a problem with tensorflow command:

clear_session()

Thoughts on this??


Diagnostic:

UnboundLocalError                         Traceback (most recent call last)
in
     14 # Clear the previous tensorflow session to prevent memory leaks
     15 
---> 16 clear_session()
     17 
     18 # Build the model

/anaconda2/envs/vampnet/lib/python3.7/site-packages/tensorflow/python/keras/backend.py in clear_session()
    343   with graph.as_default():
    344     phase = array_ops.placeholder_with_default(
--> 345         False, shape=(), name='keras_learning_phase')
    346   _GRAPH_LEARNING_PHASES = {}
    347   _GRAPH_LEARNING_PHASES[graph] = phase

/anaconda2/envs/vampnet/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py in placeholder_with_default(input, shape, name)
   2091     A Tensor. Has the same type as input.
   2092   """
-> 2093   return gen_array_ops.placeholder_with_default(input, shape, name)
   2094 

A question about whitening/centering transformations

Hi, it seems that line 340, x = x.mm(self.mul), of deeptime/time-lagged-autoencoder/tae/utils.py should be x = self.mul.mm(x), according to the zero-phase component analysis (ZCA) whitening formula Z = V D^(-1/2) V^T X.
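For reference, a minimal NumPy sketch of ZCA whitening (an illustration of the formula only, not the tae code; it assumes samples are stored as columns of X, matching Z = V D^(-1/2) V^T X):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 1000))
X = X - X.mean(axis=1, keepdims=True)   # center each feature
D, V = np.linalg.eigh(np.cov(X))        # eigendecomposition of the covariance
W = V @ np.diag(D ** -0.5) @ V.T        # whitening matrix V D^(-1/2) V^T
Z = W @ X                               # applied from the left
print(np.round(np.cov(Z), 2))           # approximately the identity matrix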

Installation fails

Hi Luca and Andreas. Installation fails for me. I don't know these required packages; maybe some nonstandard stuff. Can you fix that?

panda:vampnet noe$ python setup.py install
Couldn't find index page for 'setuptools_scm_git_archive' (maybe misspelled?)
No local packages or download links found for setuptools-scm-git-archive
Traceback (most recent call last):
  File "setup.py", line 40, in <module>
    zip_safe=False)
  File "/Users/noe/anaconda/lib/python2.7/distutils/core.py", line 111, in setup
    _setup_distribution = dist = klass(attrs)
  File "/Users/noe/anaconda/lib/python2.7/site-packages/setuptools/dist.py", line 221, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/Users/noe/anaconda/lib/python2.7/site-packages/setuptools/dist.py", line 245, in fetch_build_eggs
    parse_requirements(requires), installer=self.fetch_build_egg
  File "/Users/noe/anaconda/lib/python2.7/site-packages/pkg_resources.py", line 586, in resolve
    dist = best[req.key] = env.best_match(req, self, installer)
  File "/Users/noe/anaconda/lib/python2.7/site-packages/pkg_resources.py", line 831, in best_match
    return self.obtain(req, installer) # try and download/install
  File "/Users/noe/anaconda/lib/python2.7/site-packages/pkg_resources.py", line 843, in obtain
    return installer(requirement)
  File "/Users/noe/anaconda/lib/python2.7/site-packages/setuptools/dist.py", line 295, in fetch_build_egg
    return cmd.easy_install(req)
  File "/Users/noe/anaconda/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 598, in easy_install
    raise DistutilsError(msg)
distutils.errors.DistutilsError: Could not find suitable distribution for Requirement.parse('setuptools-scm-git-archive')

Loss function

The vampnet examples calculate three different loss functions, vamp1, vamp2, and loss, but the paper mentions only vamp2. Which is better to use?

What do the visualization pictures mean?

Here are four pictures that I got by executing examples/Alanine_dipeptide_multiple_files, but I can't understand what they mean. Could you explain them in a nutshell?
The pictures are shown below:
[attached: four result plots]

Thank you very much!

Estimating Koopman Operator with data from multiple trajectories

Hi,

in the Alanine dipeptide example with multiple files, pred_ord is calculated for total_data_source, i.e. for all trajectories. pred_ord is passed on to vamp.get_its and vamp.get_ck_test, both of which call vamp.estimate_koopman_op. pred_ord does not contain any information about the trajectory boundaries, so vamp.estimate_koopman_op will consider pairs across trajectories when calculating c_tau, e.g. traj1[-1] with traj2[tau-1]. These pairs are random, since there's no connection between the trajectories.

In the example it's not a big issue since tau is much smaller than the length of each individual trajectory. Unless the random pairs correspond to transitions that are extremely rare, the Koopman operator won't differ much.

If that's the case though, what should one do? Estimate the Koopman operator for each individual trajectory and average all of them? I did that for the example, the transition probabilities involving the sparsely populated state 3 change a little bit.

[attached: two Koopman-matrix comparison plots]

Out of curiosity, is there any explanation for the negative values in the matrix or is that a numerical issue?

In other scenarios where tau might be in the range of 10^1 ns and trajectory lengths are in the range of 10^2 ns there would be a considerable number of random pairs and a large deviation in the Koopman operator.

Please let me know what you think. I assume I'll be working with trajectories that are 5-10 tau long so I'd like a convenience function that takes care of this. If you agree with the overall thought and think that averaging over all the Koopman operators from different trajectories is fine, I'll go ahead and write it.

Niklas
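A minimal sketch of the per-trajectory averaging proposed above (a hypothetical convenience function; it assumes vamp.estimate_koopman_op accepts a single discretized trajectory and a lag time):

import numpy as np

# Estimate one Koopman matrix per trajectory and average, instead of
# concatenating trajectories and thereby creating spurious cross-trajectory pairs.
def average_koopman_op(estimate_koopman_op, trajs, tau):
    ops = [estimate_koopman_op(traj, tau) for traj in trajs if len(traj) > tau]
    return np.mean(ops, axis=0)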

Alanine dipeptide multiple trajs poor scores / validation scores higher than training

Hi!

I'm trying to reproduce the alanine dipeptide notebooks. While the single-trajectory one appears OK:

[attached: single-trajectory training curves]

the three-trajectory (multiple files) one scores poorly and, strangely, has validation scores higher than the training scores. I observe this consistently over 50 attempts:

[attached: multiple-files training curves]

All newest releases, except tensorflow which is 1.12. I remember @ppxasjsm saw the same thing when she took us through her attempts a while ago.

cc @jchodera

tensorflow version issue

Hi, can you add some version information to the README file? The latest TensorFlow and Keras versions run into errors with the examples.

TypeError: 'numpy.float64' object cannot be interpreted as an integer

I am trying to run the code from this Jupyter notebook: https://github.com/markovmodel/deeptime/blob/master/vampnet/examples/Alanine_dipeptide_multiple_files.ipynb

The part of the code at which I get the error:
max_vm = 0
attempts_number = 10

# Pretraining with VAMP with 'symmetrized' matrices yields a bad approximation
# of the eigenvectors per se, but improves the 'readability' of the states
# identified by VAMP-2, which would otherwise be difficult to interpret.
#
# IMPORTANT: the function vamp.loss_VAMP2_autograd can only be used with
# tensorflow 1.6 or more recent. For older versions of TF, use the function
# vamp.loss_VAMP2.

losses = [
    vamp.loss_VAMP2_autograd,
    vamp._loss_VAMP_sym,
    vamp.loss_VAMP2_autograd,
]

for attempt in range(attempts_number):

    # Clear the previous tensorflow session to prevent memory leaks
    clear_session()

    # Build the model
    nodes = [layer_width]*network_depth

    Data_X = Input(shape = (input_size,))
    Data_Y = Input(shape = (input_size,))

    # A batch normalization layer improves convergence speed
    bn_layer = BatchNormalization()

    # Instance layers and assign them to the two lobes of the network
    dense_layers = [Dense(node, activation = 'relu',)
                    for node in nodes]

    lx_branch = bn_layer(Data_X)
    rx_branch = bn_layer(Data_Y)

    for i, layer in enumerate(dense_layers):

        lx_branch = dense_layers[i](lx_branch)
        rx_branch = dense_layers[i](rx_branch)

    # Add a softmax output layer.
    # Should be replaced with a linear activation layer if
    # the outputs of the network cannot be interpreted as states
    softmax = Dense(output_size, activation='softmax')

    lx_branch = softmax(lx_branch)
    rx_branch = softmax(rx_branch)

    # Merge both networks to train both at the same time
    merged = concatenate([lx_branch, rx_branch])

    # Initialize the model and the optimizer, and compile it with
    # the loss and metric functions from the VAMPnets package
    model = Model(inputs = [Data_X, Data_Y], outputs = merged)
    adam = Adam(lr = learning_rate)

    vm1 = np.zeros((len(losses), nb_epoch))
    tm1 = np.zeros_like(vm1)
    vm2 = np.zeros_like(vm1)
    tm2 = np.zeros_like(vm1)

    for l_index, loss_function in enumerate(losses):

        model.compile(optimizer = adam,
                      loss = loss_function,
                      metrics = [
                          vamp.metric_VAMP,
                          vamp.metric_VAMP2,
                      ])

        # Train the model
        steps_per_train_epoch = np.sum(np.ceil((train_data_source.trajectory_lengths()-tau)/batch_size))
        steps_per_valid_epoch = np.sum(np.ceil((valid_data_source.trajectory_lengths()-tau)/batch_size))

        hist = model.fit_generator(generator = vamp_data_loader.build_generator_on_source(train_data_source,
                                                                                          batch_size,
                                                                                          tau,
                                                                                          output_size),
                                   steps_per_epoch = steps_per_train_epoch,
                                   epochs = nb_epoch,
                                   verbose = 0,
                                   validation_data = vamp_data_loader.build_generator_on_source(valid_data_source,
                                                                                                batch_size,
                                                                                                tau,
                                                                                                output_size),
                                   validation_steps = steps_per_valid_epoch,
                                   shuffle = True
                                  )

        vm1[l_index] = np.array(hist.history['val_metric_VAMP'])
        tm1[l_index] = np.array(hist.history['metric_VAMP'])

        vm2[l_index] = np.array(hist.history['val_metric_VAMP2'])
        tm2[l_index] = np.array(hist.history['metric_VAMP2'])

    vm1 = np.reshape(vm1, (-1))
    tm1 = np.reshape(tm1, (-1))
    vm2 = np.reshape(vm2, (-1))
    tm2 = np.reshape(tm2, (-1))

    # Average the score obtained in the last part of the training process
    # in order to establish which model is better and thus worth saving
    score = vm1[-5:].mean()
    t_score = tm1[-5:].mean()
    extra_msg = ''
    if score > max_vm:
        extra_msg = ' - Highest'
        best_weights = model.get_weights()
        max_vm = score
        vm1_max = vm1
        tm1_max = tm1
        vm2_max = vm2
        tm2_max = tm2

    print('Attempt {0}, training score: {1:.2f}, validation score: {2:.2f}'.format(attempt+1, t_score, score) + extra_msg)

The error I am getting:

WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.

WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/vampnet-0.1.4.dev13+g7b9cbd9-py3.7.egg/vampnet/vampnet.py:660: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.

WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/vampnet-0.1.4.dev13+g7b9cbd9-py3.7.egg/vampnet/vampnet.py:612: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.

TypeError                                 Traceback (most recent call last)
in
     96                                output_size),
     97                                validation_steps = steps_per_valid_epoch,
---> 98                                shuffle = True
     99                               )
    100 

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   1424         use_multiprocessing=use_multiprocessing,
   1425         shuffle=shuffle,
-> 1426         initial_epoch=initial_epoch)
   1427 
   1428     def evaluate_generator(self,

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_generator.py in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, **kwargs)
    174       progbar.on_epoch_begin(epoch, epoch_logs)
    175 
--> 176       for step in range(steps_per_epoch):
    177         batch_data = _get_next_batch(output_generator, mode)
    178         if batch_data is None:

TypeError: 'numpy.float64' object cannot be interpreted as an integer
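A hedged fix, not from the thread itself: range(steps_per_epoch) needs an integer, but np.sum(np.ceil(...)) returns a numpy.float64, so casting the step counts should resolve the TypeError:

# Cast the step counts to int before passing them to fit_generator;
# np.sum(np.ceil(...)) yields a numpy.float64, which range() cannot consume.
steps_per_train_epoch = int(np.sum(np.ceil(
    (train_data_source.trajectory_lengths() - tau) / batch_size)))
steps_per_valid_epoch = int(np.sum(np.ceil(
    (valid_data_source.trajectory_lengths() - tau) / batch_size)))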

Issue: installing vampnet

Have been proceeding as instructed with the installation of tensorflow and vampnet.

I followed the instructions to install tensorflow on Windows 7 using Anaconda. This had me create a tensorflow environment and install it there. I followed the instructions on the tensorflow website and this worked fine; the tensorflow "Hello ..." program worked fine.

Now I am trying to install vampnet. I did a GitHub clone of deeptime-master onto my desktop. So, still with the tensorflow environment activated, I cloned the GitHub deeptime-master, which looked like a complete GitHub repository (I think; 3 git.. files are included, plus a vampnet directory). I then went into the vampnet directory and ran python setup.py install.

This failed on line 339 of setup.py and also gave me: LookupError: setuptools-scm was unable to detect versions for 'C: ..... \deeptime-master'

Exactly the same thing happened when I tried to install tensorflow and vampnet on my MacBook Pro.

Can you give me any help with this?

Thanks

DLB

Is there data available?

Hello, I have been following your paper "VAMPnets for deep learning of molecular kinetics". I want to see what the data should look like first, so is it possible to provide a download link for the data?

name K is not defined in notebook 1D_double_well

Hi Andreas and Luca,

I am trying to use your notebook 1D_double_well, and it looks like the variable K that is used in the cell that performs the training is never defined. If the cell is run more than once, it attempts to call K.clear_session() but can't find K. See below.

if 'model' in globals():
    del model
    K.clear_session()
    
# Build the model
Data_X = Input(shape = (input_size,))
Data_Y = Input(shape = (input_size,))
[...]
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-30-c75ab1430314> in <module>()
      1 if 'model' in globals():
      2     del model
----> 3     K.clear_session()
      4 
      5 # Build the model

NameError: name 'K' is not defined
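A hedged fix, assuming the notebook otherwise uses the standalone Keras API: define K by importing the Keras backend before that cell:

# Import the Keras backend under the name K so K.clear_session() resolves.
from keras import backend as K
# (or, with tf.keras: from tensorflow.keras import backend as K)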
