
umol's People

Contributors

mainguyenanhvu, patrickbryant1


umol's Issues

Error running 'bash predict.sh'

Source 'gaff-2.11' could not be read. If this is a file, ensure that the path is correct.
Looked in the following paths and found no files named 'gaff-2.11':
/home/dhseo/Data_HDD2/Umol
/home/dhseo/mambaforge/envs/umol/lib/python3.12/site-packages/openff/amber_ff_ports/offxml
/home/dhseo/mambaforge/envs/umol/lib/python3.12/site-packages/smirnoff99frosst/offxml
/home/dhseo/mambaforge/envs/umol/lib/python3.12/site-packages/openforcefields/offxml
If 'gaff-2.11' is present as a file, ensure it is in a known SMIRNOFF encoding.
Valid formats are: ['XML']
Parsing failed while trying to parse source as a file with the following exception and message:
<class 'openff.toolkit.utils.exceptions.SMIRNOFFParseError'>
syntax error: line 1, column 0

Traceback (most recent call last):
File "/home/dhseo/Data_HDD2/Umol/./src/relax/openmm_relax.py", line 180, in <module>
system_generator = SystemGenerator(
^^^^^^^^^^^^^^^^
File "/home/dhseo/mambaforge/envs/umol/lib/python3.12/site-packages/openmmforcefields/generators/system_generators.py", line 269, in __init__
raise GAFFNotSupportedError(
openmmforcefields.generators.template_generators.GAFFNotSupportedError: This release (0.13.x) of openmmforcefields temporarily drops GAFF support and thereby the GAFFTemplateGenerator class. Support will be re-introduced in future releases (0.14.x). To use this class, install version 0.12.0 or older.
/home/dhseo/mambaforge/envs/umol/lib/python3.12/site-packages/Bio/PDB/PDBParser.py:395: PDBConstructionWarning: Ignoring unrecognized record 'END' at line 1249
warnings.warn(
Traceback (most recent call last):
File "/home/dhseo/Data_HDD2/Umol/./src/relax/add_plddt_to_relaxed.py", line 111, in <module>
relaxed_coords, relaxed_chains, relaxed_atom_numbers, relaxed_3seq, relaxed_resnos, relaxed_atoms, relaxed_bfactors = read_pdb(args.relaxed_complex[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dhseo/Data_HDD2/Umol/./src/relax/add_plddt_to_relaxed.py", line 28, in read_pdb
f=open(pdbname,'rt')
^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: './data/test_case/7NB4//7NB4_relaxed_complex.pdb'
The final relaxed structure can be found at ./data/test_case/7NB4//7NB4_relaxed_plddt.pdb
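For anyone hitting the same chain of errors: the root cause appears to be the GAFFNotSupportedError above. openmmforcefields 0.13.x temporarily removed GAFF, so the relax step never writes `7NB4_relaxed_complex.pdb`, and the later FileNotFoundError is just a downstream symptom. A tiny helper encoding what the error message itself says (this is my sketch, not the library's own API):

```python
def gaff_supported(version_str):
    """Per the GAFFNotSupportedError text above: GAFF ships in
    openmmforcefields 0.12.x and older, is dropped in 0.13.x, and is
    planned to return in 0.14.x."""
    major, minor = (int(p) for p in version_str.split('.')[:2])
    return not (major == 0 and minor == 13)

# e.g. the pinned release that still works per the error message:
assert gaff_supported('0.12.0')
```

In other words, downgrading to openmmforcefields 0.12.0 (or older) should make the relax step run again, assuming the rest of the environment accepts that pin.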

Different models

Hi,

I was wondering if there is any way to generate a number of well-ranked models. As of now I only get one model, but I can see the model sometimes gives a bad pose. Is it possible to generate more than one pose with different rankings, or alternatively to control the parameters to get a different result? I don't know the code well, so better to ask. Great tool, by the way; in other cases this approach works really well.

Best wishes,

Cesar

Error during conda env creation

...
Collecting tb-nightly==2.16.0a20231211 (from -r /path/to/bin/Umol/condaenv.h37_frer.requirements.txt (line 57))
  Downloading tb_nightly-2.16.0a20231211-py3-none-any.whl.metadata (1.8 kB)
Collecting tensorboard-data-server==0.7.2 (from -r /path/to/bin/Umol/condaenv.h37_frer.requirements.txt (line 58))
  Downloading tensorboard_data_server-0.7.2-py3-none-manylinux_2_31_x86_64.whl.metadata (1.1 kB)
Collecting tensorstore==0.1.51 (from -r /path/to/bin/Umol/condaenv.h37_frer.requirements.txt (line 59))
  Downloading tensorstore-0.1.51-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Collecting termcolor==2.4.0 (from -r /path/to/bin/Umol/condaenv.h37_frer.requirements.txt (line 60))
  Downloading termcolor-2.4.0-py3-none-any.whl.metadata (6.1 kB)
Collecting tf-estimator-nightly==2.14.0.dev2023080308 (from -r /path/to/bin/Umol/condaenv.h37_frer.requirements.txt (line 61))
  Downloading tf_estimator_nightly-2.14.0.dev2023080308-py2.py3-none-any.whl.metadata (1.3 kB)
Collecting tf-keras-nightly==2.16.0.dev2023121110 (from -r /path/to/bin/Umol/condaenv.h37_frer.requirements.txt (line 62))
  Downloading tf_keras_nightly-2.16.0.dev2023121110-py3-none-any.whl.metadata (1.5 kB)

Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement tf-nightly==2.16.0.dev20231211 (from versions: 2.16.0.dev20231225, 2.16.0.dev20231226, 2.16.0.dev20231227, 2.16.0.dev20231228, 2.16.0.dev20231229, 2.16.0.dev20231230, 2.16.0.dev20231231, 2.16.0.dev20240101, 2.16.0.dev20240102, 2.16.0.dev20240103, 2.16.0.dev20240104, 2.16.0.dev20240105, 2.16.0.dev20240106, 2.16.0.dev20240107, 2.16.0.dev20240108, 2.16.0.dev20240110, 2.16.0.dev20240119, 2.16.0.dev20240124, 2.16.0.dev20240125, 2.16.0.dev20240126, 2.16.0.dev20240127, 2.16.0.dev20240128, 2.16.0.dev20240129, 2.16.0.dev20240130, 2.16.0.dev20240201, 2.16.0.dev20240202, 2.16.0.dev20240203, 2.16.0.dev20240204, 2.16.0.dev20240205, 2.16.0.dev20240206, 2.16.0.dev20240207, 2.16.0.dev20240209, 2.16.0, 2.17.0.dev20240213, 2.17.0.dev20240214, 2.17.0.dev20240215, 2.17.0.dev20240216, 2.17.0.dev20240217, 2.17.0.dev20240218, 2.17.0.dev20240219, 2.17.0.dev20240220, 2.17.0.dev20240221, 2.17.0.dev20240222, 2.17.0.dev20240223, 2.17.0.dev20240225, 2.17.0.dev20240226, 2.17.0.dev20240227, 2.17.0.dev20240228, 2.17.0.dev20240229, 2.17.0.dev20240301, 2.17.0.dev20240302, 2.17.0.dev20240303, 2.17.0.dev20240304, 2.17.0.dev20240305, 2.17.0.dev20240306, 2.17.0.dev20240308, 2.17.0.dev20240309, 2.17.0.dev20240310, 2.17.0.dev20240312, 2.17.0.dev20240313, 2.17.0.dev20240314, 2.17.0.dev20240315, 2.17.0.dev20240316, 2.17.0.dev20240317, 2.17.0.dev20240318, 2.17.0.dev20240319, 2.17.0.dev20240320, 2.17.0.dev20240322, 2.17.0.dev20240323, 2.17.0.dev20240324)
ERROR: No matching distribution found for tf-nightly==2.16.0.dev20231211

failed

CondaEnvException: Pip failed

I'll try with tf-nightly==2.16.0 and less strict version requirements for tf-related pip packages later.

'Config' object has no attribute 'define_bool_state'

After running the following as in the test example:
conda activate umol
bash predict.sh

I got the following error from configurations.py:
AttributeError: 'Config' object has no attribute 'define_bool_state'
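This AttributeError usually means an older pinned library is calling `jax.config.define_bool_state`, which recent JAX releases removed; likely candidate here is the old chex 0.1.5 pinned elsewhere in this environment, and upgrading it (or pinning an older JAX) typically resolves it. That diagnosis is my assumption, not verified against this exact setup. A plain-Python sketch for comparing pins numerically when deciding which side to move:

```python
def version_tuple(v):
    """Compare dotted release strings numerically, e.g. '0.1.5' < '0.1.86'.
    (A simple sketch; real resolvers handle pre-release tags too.)"""
    return tuple(int(p) for p in v.split('.') if p.isdigit())

# chex 0.1.86+ no longer relies on the removed jax.config API (assumption)
needs_upgrade = version_tuple('0.1.5') < version_tuple('0.1.86')
```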

Environment failed to resolve

Hi,
I am trying to create the environment with the environment.yml file but am getting the following error:

Could not solve for environment specs
The following packages are incompatible
├─ ambertools ==23.3 py312h1577c9a_6 is requested and can be installed;
└─ openmmforcefields ==0.11.2 pyhd8ed1ab_1 is not installable because it requires
└─ ambertools >=20.0,<23 , which conflicts with any installable versions previously reported.

Is there a work around for this?

Thank you for the help.

Target pos

Hi,

I'm trying to run Umol but currently I'm stuck at the "Predict" step and don't know how to get the "target_pos $POCKET_INDICES" data. Can you help me?
Thanks for your program and I hope to receive your response.

Best wishes,
Livia.
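In case it helps while waiting for an answer: `--target_pos $POCKET_INDICES` is a list of pocket residue numbers, typically the residues near the ligand in a known or homologous structure. The helper below is an illustrative sketch of one common way to derive them (the 10 Å cutoff and 1-based numbering are my assumptions, not something the pipeline prescribes):

```python
import numpy as np

def pocket_indices(ca_coords, ligand_coords, cutoff=10.0):
    """1-based residue numbers whose CA atom lies within `cutoff` Angstrom
    of any ligand atom. `ca_coords` is (n_res, 3), `ligand_coords` is
    (n_lig_atoms, 3)."""
    # pairwise CA-to-ligand-atom distances, shape (n_res, n_lig_atoms)
    d = np.linalg.norm(ca_coords[:, None, :] - ligand_coords[None, :, :], axis=-1)
    return np.where(d.min(axis=1) <= cutoff)[0] + 1
```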

FutureWarning: jax.tree_flatten is deprecated, and will be removed in a future release. Use jax.tree_util.tree_flatten instead.

I think you should change the code to prevent the warning.

/home/tools/umol_package/Umol/src/net/model/mapping.py:49: FutureWarning: jax.tree_flatten is deprecated, and will be removed in a future release. Use jax.tree_util.tree_flatten instead.
values_tree_def = jax.tree_flatten(values)[1]
/home/tools/umol_package/Umol/src/net/model/mapping.py:53: FutureWarning: jax.tree_unflatten is deprecated, and will be removed in a future release. Use jax.tree_util.tree_unflatten instead.
return jax.tree_unflatten(values_tree_def, flat_axes)
/home/tools/umol_package/Umol/src/net/model/mapping.py:124: FutureWarning: jax.tree_flatten is deprecated, and will be removed in a future release. Use jax.tree_util.tree_flatten instead.
flat_sizes = jax.tree_flatten(in_sizes)[0]
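The warnings name the drop-in replacement: the deprecated top-level aliases moved under `jax.tree_util`. A minimal sketch of the updated calls (same semantics, no behavior change):

```python
import jax

values = {"a": [1, 2], "b": 3}

# replaces deprecated jax.tree_flatten / jax.tree_unflatten
flat, treedef = jax.tree_util.tree_flatten(values)
restored = jax.tree_util.tree_unflatten(treedef, flat)
```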

Colab Umol "predict the protein-ligand complex structure" cell

I uploaded the .a3m file from HHblits as outlined in the first cell, but ran into this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-55efa3eb56f8> in <cell line: 10>()
      8 
      9 #Predict
---> 10 predict(config.CONFIG,
     11             MSA_FEATS,
     12             LIGAND_FEATS,

7 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def, extract_traceback)
   1965   except errors.InvalidArgumentError as e:
   1966     # Convert to ValueError for backwards compatibility.
-> 1967     raise ValueError(e.message)
   1968 
   1969   # Record the current Python stack trace as the creating stacktrace of this

ValueError: Cannot reshape a tensor with 345790 elements to shape [2290,344,1] (787760 elements) for '{{node reshape_msa}} = Reshape[T=DT_INT32, Tshape=DT_INT32](Const_6, reshape_msa/shape)' with input shapes: [2290,151], [3] and with input tensors computed as partial shapes: input[1] = [2290,344,1].

The example in the notebook works fine, so it may be my formatting; I just thought I'd raise it so you're aware. I'll try a local install and see if I can get past this.
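The reshape failure (2290 sequences of 151 columns where 344 were expected) is consistent with an MSA whose aligned length does not match the query sequence. In a3m format, lowercase letters are insertions and do not count toward alignment columns, so every sequence should reduce to the same length as the query. A small checker sketch (my code, not part of Umol):

```python
def aligned_lengths(a3m_text):
    """Set of aligned lengths across all sequences in an a3m string.
    Lowercase letters are insertions and are excluded; '-' gaps count.
    A valid a3m for this pipeline should yield exactly one length,
    equal to the query length."""
    lengths = set()
    for line in a3m_text.splitlines():
        line = line.strip()
        if not line or line.startswith(('>', '#')):
            continue
        lengths.add(sum(1 for c in line if not c.islower()))
    return lengths
```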

How to output more predicted poses

Hi Patrick, thank you for sharing the Colab version of Umol. It works well and is friendly to Python newcomers. I'm just wondering if Umol can output more predicted poses (e.g., the top 10), which would be more thorough for the next steps of MD filtering and SAR studies.

Training Protocol

Hello Patrick,

Thank you for sharing your amazing work!

Forgive me if I missed something, but I could not find the details of your training regimen in the article. I am particularly interested in how training was performed on large proteins and ligands (>500 tokens). On page 10 you mention that "15 complexes are out of memory" and that you "crop these to 500 residues"; did you do the same for training? Did you randomly crop proteins as in AF2, and if so, what sequence size did you choose?

Thank you for your help in advance.

Predict the protein-ligand structure Error

Hello,
I am trying to run on the Colab
With the Input: ID:1ct9_happy
LIGAND: OC(=O)CC(C(=O)O)N
SEQUENCE:DDLQGMFAFALYDSEKDAYLIGRDHLGIIPLYMGYDEHGQLYVASEMKALVPVCRTIKEFPAGSYLWSQDGEIRSYYHRDWFDYDAVKDNVTDKNELRQALEDSVKSHLMSDVPYGVLLSGGLDSSIISAITKKYAARRVEDQERSEAWWPQLHSFAVGLPGSPDLKAAQEVANHLGTVHHEIHFTVQEGLDAIRDVIYHIETYDVTTIRASTPMYLMSRKIKAMGIKMVLSGEGSDEVFGGYLYFHKAPNAKELHEETVRKLLALHMYDCARANKAMSAWGVEARVPFLDKKFLDVAMRINPQDKMCGNGKMEKHILRECFEAYLPASVAWRQKEQFSDGVGYSWIDTLKEVAAQQVSDQQLETARFRFPYNTPTSKEAYLYREIFEELFPLPSAAECVPGGPSVACSSAKAIEWDEAFKKMDDPSGRAVGVHQSAYK
TARGET_POSITIONS:117,120,144,178
NUM_RECYCLES:3
When I try to run my example, I get the following error in the "Predict the protein-ligand structure" section:
XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 13608450984 bytes.
BufferAssignment OOM Debugging. (I am running on GPU)

Can you help me resolve this problem please?

Sequence Length Limit

Looking at the Colab notebook, there seems to be a sequence length limit of 400. However, this is not apparent anywhere else in the code or manuscript.

Where does this limit come from? Is it only due to computational constraints (i.e. VRAM)? Can this limit be surpassed with larger GPUs?
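The VRAM hypothesis is plausible given that pair activations in AlphaFold-style models grow quadratically with sequence length. A back-of-envelope sketch, assuming 128 pair channels and float32 (illustrative assumptions, not the model's actual configuration):

```python
def pair_rep_bytes(seq_len, channels=128, dtype_bytes=4):
    """Rough memory for a single L x L x C pair activation tensor.
    Real usage is much higher: many such tensors exist at once, plus
    attention logits and gradients."""
    return seq_len * seq_len * channels * dtype_bytes

mb_400 = pair_rep_bytes(400) / 2**20    # one tensor at the L=400 limit
mb_1179 = pair_rep_bytes(1179) / 2**20  # ~8.7x larger at L=1179
```

This quadratic growth suggests larger GPUs would push the limit up, but only by a factor of roughly sqrt(extra memory).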

conda environment problem

Hi,

I ran into the issue below when installing:

Could not solve for environment specs
The following packages are incompatible
├─ ambertools ==23.3 py312h1577c9a_6 is requested and can be installed;
└─ openmmforcefields ==0.11.2 pyhd8ed1ab_1 is not installable because it requires
└─ ambertools >=20.0,<23 , which conflicts with any installable versions previously reported.
I tried to install the latest version of openmmforcefields (0.13), which is compatible with the other packages, but it failed on the sample test, which indicated that openmmforcefields 0.13 does not support GAFF at the moment. Could you help me figure out the problem with the conda environment installation? Thanks.
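One possible workaround, untested here and based only on the two error messages in this thread: pin openmmforcefields to 0.12.0 (the last release with GAFF support) together with an ambertools in its supported range. A hypothetical environment.yml fragment:

```yaml
# versions chosen to satisfy the constraints reported above;
# not verified against the full Umol dependency stack
dependencies:
  - ambertools >=20.0,<23
  - openmmforcefields ==0.12.0
```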

Some questions about giant proteins

Hi Mr Bryant:

When I tried to predict some huge proteins (about 1179 amino acids), it returned an error like the one below:
XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 45026377728 bytes.

I ran the code on a Tesla A100 GPU and tried setting os.environ['XLA_PYTHON_CLIENT_PREALLOCATE'] = 'false', but it still returns the same error.

May I ask if you have any advice on this? For example, predicting in parts, though I'm not sure whether that would affect the prediction accuracy.
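For reference, the XLA allocator environment variables only take effect if set before JAX is imported anywhere in the process, and there are two further knobs besides PREALLOCATE that are sometimes worth trying (whether they help at this model size is untested; a 1179-residue input may simply exceed the card's memory):

```python
import os

# Must run before `import jax` for any of these to have an effect
os.environ['XLA_PYTHON_CLIENT_PREALLOCATE'] = 'false'
os.environ['XLA_PYTHON_CLIENT_MEM_FRACTION'] = '0.95'   # cap on GPU memory JAX may claim
os.environ['XLA_PYTHON_CLIENT_ALLOCATOR'] = 'platform'  # allocate/free on demand (slower)
```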

Any response would be helpful
Gratefully!

IndexError: list index out of range while creating a new protein

Hi all,

I got an error when submitting a new protein.

Could you help with this please? I have updated the MSA, sequence and positions:
LIGAND = "N#CCC(=O)N(CC1)CC@@HN(C)c2ncnc(c23)[nH]cc3" # @param {type:"string"}
SEQUENCE = "RKSPLTLEDFKFLAVLGRGHFGKVLLSEFRPSGELFAIKALKKGDIVARDEVESLMCEKRILAAVTSAGHPFLVNLFGCFQTPEHVCFVMEYSAGGDLMLHIHSDVFSEPRAIFYSACVVLGLQFLHEHKIVYRDLKLDNLLLDTEGYVKIADFGLCKEGMGYGDRTSTFCGTPEFLAPEVLTDTSYTRAVDWWGLGVLLYEMLVGESPFPGDDEEEVFDSIVNDEVRYPRFLSAEAIGIMRRLLRRNPERRLGSSERDAEDVKKQPFFRTLGWEALLARRLPPPFVPTLSGRTDVSNFDEEFTGEAPTLSPPRDARPLTAAEQAAFLDFDFVAGGC" #@param {type:"string"}
TARGET_POSITIONS = "17,28,19,20,23,24,25,91,92,93,94" #@param {type:"string"}

It creates the protein in the first step, but then it fails when it creates the parameters and the complex.

error:
File /cluster/ddu/cmmartinez001/Projects/Umol/content/Umol/src/make_msa_seq_feats_colab.py:98, in process(input_fasta_path, input_msas)
96 parsed_msa, parsed_deletion_matrix, _ = parsers.parse_stockholm(msa)
97 elif custom_msa[-3:] == 'a3m':
---> 98 parsed_msa, parsed_deletion_matrix = parsers.parse_a3m(msa)
99 else: raise TypeError('Unknown format for input MSA, please make sure '
100 'the MSA files you provide terminates with (and '
101 'are formatted as) .sto or .a3m')
102 parsed_msas.append(parsed_msa)

File /cluster/ddu/cmmartinez001/Projects/Umol/content/Umol/src/net/data/parsers.py:142, in parse_a3m(a3m_string)
127 def parse_a3m(a3m_string: str) -> Tuple[Sequence[str], DeletionMatrix]:
128 """Parses sequences and deletion matrix from a3m format alignment.
129
130 Args:
(...)
140 the aligned sequence i at residue position j.
141 """
--> 142 sequences, _ = parse_fasta(a3m_string)
143 deletion_matrix = []
144 for msa_sequence in sequences:

File /cluster/ddu/cmmartinez001/Projects/Umol/content/Umol/src/net/data/parsers.py:62, in parse_fasta(fasta_string)
60 elif not line:
61 continue # Skip blank lines.
---> 62 sequences[index] += line
64 return sequences, descriptions

IndexError: list index out of range

Best wishes,

Cesar
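For what it's worth, this IndexError in parse_fasta fires when a sequence line appears before any '>' header line, which often means the uploaded MSA is not a valid a3m/fasta (e.g. a stray first line, or the header stripped during upload). A minimal re-implementation of the parser's logic, my sketch mirroring the behavior of src/net/data/parsers.py, shows why:

```python
def parse_fasta(fasta_string):
    """Sequences are only appended after a '>' header has been seen;
    a sequence line before any header indexes into an empty list and
    raises IndexError: list index out of range."""
    sequences, descriptions = [], []
    index = -1
    for line in fasta_string.splitlines():
        line = line.strip()
        if line.startswith('>'):
            index += 1
            descriptions.append(line[1:])
            sequences.append('')
        elif not line:
            continue  # skip blank lines
        else:
            sequences[index] += line  # IndexError if no header seen yet
    return sequences, descriptions
```

Checking that the very first non-blank line of the .a3m starts with '>' is a quick way to rule this out.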

[question]: Around training the model.

Hi! First of all, thanks for making your work so readily available.

I am looking to get a PyTorch reproduction of the repository going. I have not run into problems for inference (adapting from OpenFold and converting weights), but am running into a couple of challenges at train time, and wondered if you could help me understand some implementation details.


I see in the make_uniform function of predict.py that a comment says the amino acid type is set to glycine, but the zero index that remains actually sets the amino acid to alanine. Wouldn't this matter for the pseudo_beta_fn and the inclusion of the ligand in the distogram loss?

# 20, where 20 is 'X'. Put 0 (GLY) for ligand atoms - will take care of lots of mapping inside the net
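For context, in the AlphaFold-style residue ordering, index 0 is indeed alanine, with glycine at index 7, so the comment and the code do disagree as the question suggests:

```python
# AlphaFold's canonical restype order (residue_constants.restypes)
restypes = ['A', 'R', 'N', 'D', 'C', 'Q', 'E', 'G', 'H', 'I',
            'L', 'K', 'M', 'F', 'P', 'S', 'T', 'W', 'Y', 'V']

index_zero = restypes[0]         # 'A' (alanine), not glycine
gly_index = restypes.index('G')  # 7
```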


In folding.py, for the backbone_loss, an "atom14_gt_exists_protein" feature is built. I presume this contains atom masks for the protein only, as opposed to "atom14_gt_exists", which must contain atoms for both the protein and the ligand?

backbone_mask_protein = batch['atom14_gt_exists_protein'][:,0]

What about in the sidechain_loss?

flat_frames_mask = jnp.reshape(batch['rigidgroups_gt_exists']*batch['rigidgroups_gt_protein_exists'], [-1])


Thanks for your help!

RecursionError: maximum recursion depth exceeded

Did anyone experience this same error? If yes, what was the workaround?

File "/homes/cveranso/.conda/envs/Umol2/lib/python3.12/site-packages/numpy/core/_dtype.py", line 143, in _scalar_str
elif np.issubdtype(dtype, np.number):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/homes/cveranso/.conda/envs/Umol2/lib/python3.12/site-packages/numpy/core/numerictypes.py", line 417, in issubdtype
arg1 = dtype(arg1).type
^^^^^^^^^^^
File "/homes/cveranso/.conda/envs/Umol2/lib/python3.12/site-packages/numpy/core/_dtype.py", line 46, in __repr__
arg_str = _construction_repr(dtype, include_align=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RecursionError: maximum recursion depth exceeded

Traceback (most recent call last):
File "/home/disk6/homes/cveranso/Umol/Umol-master/./src/relax/align_ligand_conformer.py", line 192, in <module>
pred_ligand = read_pdb(args.pred_pdb[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/disk6/homes/cveranso/Umol/Umol-master/./src/relax/align_ligand_conformer.py", line 28, in read_pdb
struc = parser.get_structure('',open(pred_pdb,'r'))
^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/homes/cveranso/Umol/Umol-master/data/test_case/6UWP/6UWP_pred_raw.pdb'

Selection of the interaction sites and pLDDT score

Hi Patrick,

I was wondering what, in your experience, would be a good selection of the interaction sites. I have noticed the selection has an impact on the outcome, for instance between 10 and 7 Å. Another question: if I test different sites on the same protein, could the pLDDT give a kind of score for the probable binding site?

Thanks a lot,

Cesar

Colab crashes due to error in Install dependencies step

Hello,

I am encountering some incompatibility issues between package versions that are preventing the Colab code from running properly (see below). I suspect that the problem may be related to the version of NumPy being used. Do you have any suggestions on how to resolve this issue?

Thank you.

Looking in links: https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
Requirement already satisfied: jax[cuda12_pip] in /usr/local/lib/python3.10/dist-packages (0.4.30)
Requirement already satisfied: jaxlib<=0.4.30,>=0.4.27 in /usr/local/lib/python3.10/dist-packages (from jax[cuda12_pip]) (0.4.30)
Requirement already satisfied: ml-dtypes>=0.2.0 in /usr/local/lib/python3.10/dist-packages (from jax[cuda12_pip]) (0.2.0)
Requirement already satisfied: numpy>=1.22 in /usr/local/lib/python3.10/dist-packages (from jax[cuda12_pip]) (1.22.4)
Requirement already satisfied: opt-einsum in /usr/local/lib/python3.10/dist-packages (from jax[cuda12_pip]) (3.3.0)
Collecting scipy>=1.9 (from jax[cuda12_pip])
Using cached scipy-1.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (41.1 MB)
Requirement already satisfied: jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30 in /usr/local/lib/python3.10/dist-packages (from jax[cuda12_pip]) (0.4.30)
Requirement already satisfied: jax-cuda12-pjrt==0.4.30 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (0.4.30)
Requirement already satisfied: nvidia-cublas-cu12>=12.1.3.1 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (12.5.3.2)
Requirement already satisfied: nvidia-cuda-cupti-cu12>=12.1.105 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (12.5.82)
Requirement already satisfied: nvidia-cuda-nvcc-cu12>=12.1.105 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (12.5.82)
Requirement already satisfied: nvidia-cuda-runtime-cu12>=12.1.105 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (12.5.82)
Requirement already satisfied: nvidia-cudnn-cu12<10.0,>=9.0 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (9.2.0.82)
Requirement already satisfied: nvidia-cufft-cu12>=11.0.2.54 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (11.2.3.61)
Requirement already satisfied: nvidia-cusolver-cu12>=11.4.5.107 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (11.6.3.83)
Requirement already satisfied: nvidia-cusparse-cu12>=12.1.0.106 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (12.5.1.3)
Requirement already satisfied: nvidia-nccl-cu12>=2.18.1 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (2.22.3)
Requirement already satisfied: nvidia-nvjitlink-cu12>=12.1.105 in /usr/local/lib/python3.10/dist-packages (from jax-cuda12-plugin[with_cuda]<=0.4.30,>=0.4.30->jax[cuda12_pip]) (12.5.82)
Collecting numpy>=1.22 (from jax[cuda12_pip])
Using cached numpy-2.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.3 MB)
Installing collected packages: numpy, scipy
Attempting uninstall: numpy
Found existing installation: numpy 1.22.4
Uninstalling numpy-1.22.4:
Successfully uninstalled numpy-1.22.4
Attempting uninstall: scipy
Found existing installation: scipy 1.7.3
Uninstalling scipy-1.7.3:
Successfully uninstalled scipy-1.7.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
astropy 5.3.4 requires numpy<2,>=1.21, but you have numpy 2.0.0 which is incompatible.
cudf-cu12 24.4.1 requires numpy<2.0a0,>=1.23, but you have numpy 2.0.0 which is incompatible.
cudf-cu12 24.4.1 requires pandas<2.2.2dev0,>=2.0, but you have pandas 1.3.5 which is incompatible.
cudf-cu12 24.4.1 requires protobuf<5,>=3.20, but you have protobuf 3.19.6 which is incompatible.
cupy-cuda12x 12.2.0 requires numpy<1.27,>=1.20, but you have numpy 2.0.0 which is incompatible.
ibis-framework 8.0.0 requires numpy<2,>=1, but you have numpy 2.0.0 which is incompatible.
numba 0.58.1 requires numpy<1.27,>=1.22, but you have numpy 2.0.0 which is incompatible.
optax 0.2.2 requires chex>=0.1.86, but you have chex 0.1.5 which is incompatible.
pandas-gbq 0.19.2 requires google-auth-oauthlib>=0.7.0, but you have google-auth-oauthlib 0.4.6 which is incompatible.
plotnine 0.12.4 requires pandas>=1.5.0, but you have pandas 1.3.5 which is incompatible.
rmm-cu12 24.4.0 requires numpy<2.0a0,>=1.23, but you have numpy 2.0.0 which is incompatible.
statsmodels 0.14.2 requires pandas!=2.1.0,>=1.4, but you have pandas 1.3.5 which is incompatible.
tensorflow-datasets 4.9.6 requires protobuf>=3.20, but you have protobuf 3.19.6 which is incompatible.
tf-keras 2.15.1 requires tensorflow<2.16,>=2.15, but you have tensorflow 2.11.0 which is incompatible.
thinc 8.2.5 requires numpy<2.0.0,>=1.19.0; python_version >= "3.9", but you have numpy 2.0.0 which is incompatible.
xarray 2023.7.0 requires pandas>=1.4, but you have pandas 1.3.5 which is incompatible.
Successfully installed numpy-2.0.0 scipy-1.14.0
Requirement already satisfied: ml-collections==0.1.1 in /usr/local/lib/python3.10/dist-packages (0.1.1)
Requirement already satisfied: absl-py in /usr/local/lib/python3.10/dist-packages (from ml-collections==0.1.1) (1.4.0)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.10/dist-packages (from ml-collections==0.1.1) (6.0.1)
Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from ml-collections==0.1.1) (1.16.0)
Requirement already satisfied: contextlib2 in /usr/local/lib/python3.10/dist-packages (from ml-collections==0.1.1) (21.6.0)
Requirement already satisfied: dm-haiku==0.0.11 in /usr/local/lib/python3.10/dist-packages (0.0.11)
Requirement already satisfied: absl-py>=0.7.1 in /usr/local/lib/python3.10/dist-packages (from dm-haiku==0.0.11) (1.4.0)
Requirement already satisfied: jmp>=0.0.2 in /usr/local/lib/python3.10/dist-packages (from dm-haiku==0.0.11) (0.0.4)
Requirement already satisfied: numpy>=1.18.0 in /usr/local/lib/python3.10/dist-packages (from dm-haiku==0.0.11) (2.0.0)
Requirement already satisfied: tabulate>=0.8.9 in /usr/local/lib/python3.10/dist-packages (from dm-haiku==0.0.11) (0.9.0)
Requirement already satisfied: flax>=0.7.1 in /usr/local/lib/python3.10/dist-packages (from dm-haiku==0.0.11) (0.8.4)
Requirement already satisfied: jax>=0.4.19 in /usr/local/lib/python3.10/dist-packages (from flax>=0.7.1->dm-haiku==0.0.11) (0.4.30)
Requirement already satisfied: msgpack in /usr/local/lib/python3.10/dist-packages (from flax>=0.7.1->dm-haiku==0.0.11) (1.0.8)
Requirement already satisfied: optax in /usr/local/lib/python3.10/dist-packages (from flax>=0.7.1->dm-haiku==0.0.11) (0.2.2)
Requirement already satisfied: orbax-checkpoint in /usr/local/lib/python3.10/dist-packages (from flax>=0.7.1->dm-haiku==0.0.11) (0.4.4)
Requirement already satisfied: tensorstore in /usr/local/lib/python3.10/dist-packages (from flax>=0.7.1->dm-haiku==0.0.11) (0.1.45)
Requirement already satisfied: rich>=11.1 in /usr/local/lib/python3.10/dist-packages (from flax>=0.7.1->dm-haiku==0.0.11) (13.7.1)
Requirement already satisfied: typing-extensions>=4.2 in /usr/local/lib/python3.10/dist-packages (from flax>=0.7.1->dm-haiku==0.0.11) (4.12.2)
Requirement already satisfied: PyYAML>=5.4.1 in /usr/local/lib/python3.10/dist-packages (from flax>=0.7.1->dm-haiku==0.0.11) (6.0.1)
Requirement already satisfied: jaxlib<=0.4.30,>=0.4.27 in /usr/local/lib/python3.10/dist-packages (from jax>=0.4.19->flax>=0.7.1->dm-haiku==0.0.11) (0.4.30)
Requirement already satisfied: ml-dtypes>=0.2.0 in /usr/local/lib/python3.10/dist-packages (from jax>=0.4.19->flax>=0.7.1->dm-haiku==0.0.11) (0.2.0)
Requirement already satisfied: opt-einsum in /usr/local/lib/python3.10/dist-packages (from jax>=0.4.19->flax>=0.7.1->dm-haiku==0.0.11) (3.3.0)
Requirement already satisfied: scipy>=1.9 in /usr/local/lib/python3.10/dist-packages (from jax>=0.4.19->flax>=0.7.1->dm-haiku==0.0.11) (1.14.0)
Requirement already satisfied: markdown-it-py>=2.2.0 in /usr/local/lib/python3.10/dist-packages (from rich>=11.1->flax>=0.7.1->dm-haiku==0.0.11) (3.0.0)
Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/local/lib/python3.10/dist-packages (from rich>=11.1->flax>=0.7.1->dm-haiku==0.0.11) (2.16.1)
Collecting chex>=0.1.86 (from optax->flax>=0.7.1->dm-haiku==0.0.11)
Using cached chex-0.1.86-py3-none-any.whl (98 kB)
Requirement already satisfied: etils[epath,epy] in /usr/local/lib/python3.10/dist-packages (from orbax-checkpoint->flax>=0.7.1->dm-haiku==0.0.11) (1.7.0)
Requirement already satisfied: nest_asyncio in /usr/local/lib/python3.10/dist-packages (from orbax-checkpoint->flax>=0.7.1->dm-haiku==0.0.11) (1.6.0)
Requirement already satisfied: protobuf in /usr/local/lib/python3.10/dist-packages (from orbax-checkpoint->flax>=0.7.1->dm-haiku==0.0.11) (3.19.6)
Requirement already satisfied: toolz>=0.9.0 in /usr/local/lib/python3.10/dist-packages (from chex>=0.1.86->optax->flax>=0.7.1->dm-haiku==0.0.11) (0.12.1)
Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.10/dist-packages (from markdown-it-py>=2.2.0->rich>=11.1->flax>=0.7.1->dm-haiku==0.0.11) (0.1.2)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from etils[epath,epy]->orbax-checkpoint->flax>=0.7.1->dm-haiku==0.0.11) (2023.6.0)
Requirement already satisfied: importlib_resources in /usr/local/lib/python3.10/dist-packages (from etils[epath,epy]->orbax-checkpoint->flax>=0.7.1->dm-haiku==0.0.11) (6.4.0)
Requirement already satisfied: zipp in /usr/local/lib/python3.10/dist-packages (from etils[epath,epy]->orbax-checkpoint->flax>=0.7.1->dm-haiku==0.0.11) (3.19.2)
Installing collected packages: chex
Attempting uninstall: chex
Found existing installation: chex 0.1.5
Uninstalling chex-0.1.5:
Successfully uninstalled chex-0.1.5
Successfully installed chex-0.1.86
Requirement already satisfied: pandas==1.3.5 in /usr/local/lib/python3.10/dist-packages (1.3.5)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.10/dist-packages (from pandas==1.3.5) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.10/dist-packages (from pandas==1.3.5) (2023.4)
Requirement already satisfied: numpy>=1.21.0 in /usr/local/lib/python3.10/dist-packages (from pandas==1.3.5) (2.0.0)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7.3->pandas==1.3.5) (1.16.0)
Requirement already satisfied: biopython==1.81 in /usr/local/lib/python3.10/dist-packages (1.81)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from biopython==1.81) (2.0.0)
Collecting chex==0.1.5
Using cached chex-0.1.5-py3-none-any.whl (85 kB)
Requirement already satisfied: absl-py>=0.9.0 in /usr/local/lib/python3.10/dist-packages (from chex==0.1.5) (1.4.0)
Requirement already satisfied: dm-tree>=0.1.5 in /usr/local/lib/python3.10/dist-packages (from chex==0.1.5) (0.1.8)
Requirement already satisfied: jax>=0.1.55 in /usr/local/lib/python3.10/dist-packages (from chex==0.1.5) (0.4.30)
Requirement already satisfied: jaxlib>=0.1.37 in /usr/local/lib/python3.10/dist-packages (from chex==0.1.5) (0.4.30)
Requirement already satisfied: numpy>=1.18.0 in /usr/local/lib/python3.10/dist-packages (from chex==0.1.5) (2.0.0)
Requirement already satisfied: toolz>=0.9.0 in /usr/local/lib/python3.10/dist-packages (from chex==0.1.5) (0.12.1)
Requirement already satisfied: ml-dtypes>=0.2.0 in /usr/local/lib/python3.10/dist-packages (from jax>=0.1.55->chex==0.1.5) (0.2.0)
Requirement already satisfied: opt-einsum in /usr/local/lib/python3.10/dist-packages (from jax>=0.1.55->chex==0.1.5) (3.3.0)
Requirement already satisfied: scipy>=1.9 in /usr/local/lib/python3.10/dist-packages (from jax>=0.1.55->chex==0.1.5) (1.14.0)
Installing collected packages: chex
Attempting uninstall: chex
Found existing installation: chex 0.1.86
Uninstalling chex-0.1.86:
Successfully uninstalled chex-0.1.86
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
optax 0.2.2 requires chex>=0.1.86, but you have chex 0.1.5 which is incompatible.
Successfully installed chex-0.1.5
Requirement already satisfied: dm-tree==0.1.8 in /usr/local/lib/python3.10/dist-packages (0.1.8)
Requirement already satisfied: immutabledict==2.0.0 in /usr/local/lib/python3.10/dist-packages (2.0.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (2.0.0)
Collecting scipy==1.7.3
Using cached scipy-1.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.9 MB)
Collecting numpy<1.23.0,>=1.16.5 (from scipy==1.7.3)
Using cached numpy-1.22.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
Installing collected packages: numpy, scipy
Attempting uninstall: numpy
Found existing installation: numpy 2.0.0
Uninstalling numpy-2.0.0:
Successfully uninstalled numpy-2.0.0
Attempting uninstall: scipy
Found existing installation: scipy 1.14.0
Uninstalling scipy-1.14.0:
Successfully uninstalled scipy-1.14.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
arviz 0.15.1 requires scipy>=1.8.0, but you have scipy 1.7.3 which is incompatible.
cudf-cu12 24.4.1 requires numpy<2.0a0,>=1.23, but you have numpy 1.22.4 which is incompatible.
cudf-cu12 24.4.1 requires pandas<2.2.2dev0,>=2.0, but you have pandas 1.3.5 which is incompatible.
cudf-cu12 24.4.1 requires protobuf<5,>=3.20, but you have protobuf 3.19.6 which is incompatible.
jax 0.4.30 requires scipy>=1.9, but you have scipy 1.7.3 which is incompatible.
jaxlib 0.4.30 requires scipy>=1.9, but you have scipy 1.7.3 which is incompatible.
numexpr 2.10.1 requires numpy>=1.23.0, but you have numpy 1.22.4 which is incompatible.
optax 0.2.2 requires chex>=0.1.86, but you have chex 0.1.5 which is incompatible.
pandas-gbq 0.19.2 requires google-auth-oauthlib>=0.7.0, but you have google-auth-oauthlib 0.4.6 which is incompatible.
pandas-stubs 2.0.3.230814 requires numpy>=1.25.0; python_version >= "3.9", but you have numpy 1.22.4 which is incompatible.
plotnine 0.12.4 requires numpy>=1.23.0, but you have numpy 1.22.4 which is incompatible.
plotnine 0.12.4 requires pandas>=1.5.0, but you have pandas 1.3.5 which is incompatible.
rmm-cu12 24.4.0 requires numpy<2.0a0,>=1.23, but you have numpy 1.22.4 which is incompatible.
statsmodels 0.14.2 requires pandas!=2.1.0,>=1.4, but you have pandas 1.3.5 which is incompatible.
statsmodels 0.14.2 requires scipy!=1.9.2,>=1.8, but you have scipy 1.7.3 which is incompatible.
tensorflow-datasets 4.9.6 requires protobuf>=3.20, but you have protobuf 3.19.6 which is incompatible.
tf-keras 2.15.1 requires tensorflow<2.16,>=2.15, but you have tensorflow 2.11.0 which is incompatible.
xarray 2023.7.0 requires pandas>=1.4, but you have pandas 1.3.5 which is incompatible.
xarray-einstats 0.7.0 requires scipy>=1.8, but you have scipy 1.7.3 which is incompatible.
Successfully installed numpy-1.22.4 scipy-1.7.3
WARNING: The following packages were previously imported in this runtime:
[numpy]
You must restart the runtime in order to use newly installed versions.
Requirement already satisfied: tensorflow==2.11.0 in /usr/local/lib/python3.10/dist-packages (2.11.0)
Requirement already satisfied: absl-py>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (1.4.0)
Requirement already satisfied: astunparse>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (1.6.3)
Requirement already satisfied: flatbuffers>=2.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (24.3.25)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (0.4.0)
Requirement already satisfied: google-pasta>=0.1.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (0.2.0)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (1.64.1)
Requirement already satisfied: h5py>=2.9.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (3.9.0)
Requirement already satisfied: keras<2.12,>=2.11.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (2.11.0)
Requirement already satisfied: libclang>=13.0.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (18.1.1)
Requirement already satisfied: numpy>=1.20 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (1.22.4)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (3.3.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (24.1)
Requirement already satisfied: protobuf<3.20,>=3.9.2 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (3.19.6)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (67.7.2)
Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (1.16.0)
Requirement already satisfied: tensorboard<2.12,>=2.11 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (2.11.2)
Requirement already satisfied: tensorflow-estimator<2.12,>=2.11.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (2.11.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (2.4.0)
Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (4.12.2)
Requirement already satisfied: wrapt>=1.11.0 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (1.14.1)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.11.0) (0.37.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.10/dist-packages (from astunparse>=1.6.0->tensorflow==2.11.0) (0.43.0)
Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.12,>=2.11->tensorflow==2.11.0) (2.27.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.12,>=2.11->tensorflow==2.11.0) (0.4.6)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.12,>=2.11->tensorflow==2.11.0) (3.6)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.12,>=2.11->tensorflow==2.11.0) (2.31.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.12,>=2.11->tensorflow==2.11.0) (0.6.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.12,>=2.11->tensorflow==2.11.0) (1.8.1)
Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from tensorboard<2.12,>=2.11->tensorflow==2.11.0) (3.0.3)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (5.3.3)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (0.4.0)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (4.9)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (1.3.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.21.0->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.21.0->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.21.0->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (2.0.7)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2.21.0->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (2024.6.2)
Requirement already satisfied: MarkupSafe>=2.1.1 in /usr/local/lib/python3.10/dist-packages (from werkzeug>=1.0.1->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (2.1.5)
Requirement already satisfied: pyasn1<0.7.0,>=0.4.6 in /usr/local/lib/python3.10/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (0.6.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.10/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.12,>=2.11->tensorflow==2.11.0) (3.2.2)
Requirement already satisfied: rdkit-pypi in /usr/local/lib/python3.10/dist-packages (2022.9.5)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from rdkit-pypi) (1.22.4)
Requirement already satisfied: Pillow in /usr/local/lib/python3.10/dist-packages (from rdkit-pypi) (9.4.0)
Requirement already satisfied: py3Dmol in /usr/local/lib/python3.10/dist-packages (2.1.0)

Docker to run it

Is there a Docker image to run it? The script installation fails for me.

Multiple chains in protein sequence

Thanks for sharing the Umol code; it is a highly helpful resource. I want to try it on a protein with multiple (4) chains. Each chain contains around 900 residues, and the ligand interacts with residues from all 4 chains. I would like to use only a subset of residues near the binding site as the input sequence. Is it possible to run Umol for such a case? If yes, how can I do it? E.g., what should the sequence format be for the MSA and for Umol?
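One possible workaround (an assumption, not an official Umol feature) is to extract the binding-site residues from each chain and join them into a single pseudo-sequence for the MSA/Umol input, while keeping track of where each chain's fragment starts so pocket residue indices can be mapped back. A minimal sketch with a hypothetical helper:

```python
# Sketch: merge per-chain binding-site subsequences into one pseudo-sequence.
# Chain order and the absence of any linker are assumptions; Umol expects a
# single-chain input, so the offsets are needed to translate pocket indices.

def make_pseudo_sequence(chain_subseqs):
    """Concatenate per-chain fragments; return the merged sequence and the
    0-based offset at which each chain's fragment begins in it."""
    merged = ""
    offsets = {}
    for chain_id, subseq in chain_subseqs.items():
        offsets[chain_id] = len(merged)
        merged += subseq
    return merged, offsets

merged, offsets = make_pseudo_sequence({"A": "MKV", "B": "GLS", "C": "TYW"})
# merged == "MKVGLSTYW", offsets == {"A": 0, "B": 3, "C": 6}
```

Whether a stitched pseudo-sequence yields a meaningful MSA and prediction is untested here; cutting chains can remove evolutionary context the network relies on.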

Predict ligand positions for large protein by multiple GPUs

Hi,

I am trying to predict ligand positions for a protein with around 700 residues, but I get the error below:
XlaRuntimeError: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 17608450984 bytes (17.6GB).

I checked 'predict.sh' and realized the failure is due to running out of memory when executing './src/prediction.py'. My machine has 4 RTX 4070 Ti Super GPUs (16 GB RAM each), and I wonder if I can spread the allocation across multiple GPUs so that I have enough memory for the prediction. Many thanks.
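One caveat worth noting: by default JAX preallocates most of a single GPU's memory and does not automatically shard one model across several devices, so four 16 GB cards do not behave like one 64 GB card. What can help with fragmentation on the single card being used are the documented XLA client flags, which must be set before jax is imported:

```python
# Documented JAX/XLA memory flags; they must be in the environment before
# jax (or anything that imports jax) is loaded. They reduce fragmentation
# and over-allocation, but cannot pool memory across GPUs.
import os

os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"   # allocate on demand
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.9"    # cap per-GPU usage
os.environ["XLA_PYTHON_CLIENT_ALLOCATOR"] = "platform"  # free blocks eagerly

# import jax  # only after the flags above are set
```

If the model's activations genuinely exceed 16 GB, these flags will not be sufficient; a larger single GPU (or a modified, device-sharded version of the prediction code) would be needed.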

ValueError: operands could not be broadcast together with shapes (19,19) (2861,2861)

Hi there,
Thanks for developing such a powerful tool.

I have some models predicted by ColabFold, and to ensure data consistency I decided to use my own model with Umol.
But when I upload my own ColabFold-predicted model, I receive the error below at this step. The "_pred_raw.pdb" file was generated, but "generate_best_conformer" does not seem able to handle it.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in <cell line: 9>()
    107 #Get a conformer
    108 pred_ligand = read_pdb(RAW_PDB)
--> 109 best_conf, best_conf_pos, best_conf_err, atoms, nonH_inds, mol, best_conf_id = generate_best_conformer(pred_ligand['chain_coords'], LIGAND)
    110
    111 #Align it to the prediction

/content/Umol/src/relax/align_ligand_conformer_colab.py in generate_best_conformer(pred_coords, ligand_smiles)
    102 nonH_pos = pos[nonH_inds]
    103 conf_dmat = np.sqrt(1e-10 + np.sum((nonH_pos[:,None]-nonH_pos[None,:])**2,axis=-1))
--> 104 err = np.mean(np.sqrt(1e-10 + (conf_dmat-pred_dmat)**2))
    105 conf_errs.append(err)
    106

ValueError: operands could not be broadcast together with shapes (19,19) (2861,2861)

Any advice would be helpful,
Best regard !
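The mismatched shapes in the traceback suggest that the ligand conformer's distance matrix (19 non-H atoms) is being compared against a distance matrix built from 2861 coordinates, i.e. `pred_ligand['chain_coords']` apparently contains the whole complex rather than only the ligand chain. The failure can be reproduced with plain NumPy (the sizes 19 and 50 below are illustrative stand-ins for the shapes in the traceback):

```python
import numpy as np

def pairwise_dmat(coords):
    """All-vs-all Euclidean distance matrix, as computed in
    align_ligand_conformer_colab.py."""
    coords = np.asarray(coords, dtype=float)
    return np.sqrt(1e-10 + np.sum((coords[:, None] - coords[None, :]) ** 2, axis=-1))

conf_dmat = pairwise_dmat(np.random.rand(19, 3))  # ligand heavy atoms
pred_dmat = pairwise_dmat(np.random.rand(50, 3))  # whole-complex coords

caught = None
try:
    _ = conf_dmat - pred_dmat  # (19,19) vs (50,50): cannot broadcast
except ValueError as err:
    caught = err
print(caught)
```

A plausible check (an assumption about the cause, not a confirmed fix) is to verify that the uploaded PDB marks the ligand as a separate HETATM chain in the format Umol's `read_pdb` expects, so that only ligand atoms end up in `chain_coords`.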

FileNotFoundError

I was running the predict.sh trial for the 7NB4 protein.

However, I get the error below and would be happy to get some help.

tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists.
Traceback (most recent call last):
File "./src/relax/align_ligand_conformer.py", line 192, in
pred_ligand = read_pdb(args.pred_pdb[0])
File "./src/relax/align_ligand_conformer.py", line 28, in read_pdb
struc = parser.get_structure('',open(pred_pdb,'r'))
FileNotFoundError: [Errno 2] No such file or directory: './data/test_case/7NB4/7NB4_pred_raw.pdb'
grep: ./data/test_case/7NB4/7NB4_pred_raw.pdb: No such file or directory
Traceback (most recent call last):
File "./src/relax/openmm_relax.py", line 13, in
from openff.toolkit import Molecule
ImportError: cannot import name 'Molecule' from 'openff.toolkit' (/homes/cveranso/.conda/envs/Umol/lib/python3.7/site-packages/openff/toolkit/init.py)
Traceback (most recent call last):
File "./src/relax/add_plddt_to_relaxed.py", line 110, in
raw_coords, raw_chains, raw_atom_numbers, raw_3seq, raw_resnos, raw_atoms, raw_bfactors = read_pdb(args.raw_complex[0])
File "./src/relax/add_plddt_to_relaxed.py", line 28, in read_pdb
f=open(pdbname,'rt')
FileNotFoundError: [Errno 2] No such file or directory: './data/test_case/7NB4/7NB4_pred_raw.pdb'
(Umol) cveranso@g02:~/Umol/Umol-master$

(Umol) cveranso@g02:~/Umol/Umol-master$ pip3 install openff.toolkit
Requirement already satisfied: openff.toolkit in /home/disk6/homes/cveranso/.conda/envs/Umol/lib/python3.7/site-packages (0.10.7)
WARNING: There was an error checking the latest version of pip.
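For what it's worth, the `ImportError` above is consistent with the installed openff-toolkit version: in releases before roughly 0.11, `Molecule` is not re-exported at the package top level and lives in `openff.toolkit.topology` instead. A version-tolerant lookup (hedged sketch; the helper name is hypothetical) would be:

```python
import importlib

def import_molecule():
    """Return the Molecule class from wherever this openff-toolkit release
    exposes it: the package top level in newer releases, the topology
    submodule in 0.10.x and earlier."""
    try:
        mod = importlib.import_module("openff.toolkit")
        return mod.Molecule
    except (ImportError, AttributeError):
        mod = importlib.import_module("openff.toolkit.topology")
        return mod.Molecule
```

The more robust fix is likely to recreate the environment following Umol's install instructions, which use a newer openff-toolkit than the 0.10.7 build present in this Python 3.7 environment (newer toolkit releases require Python >= 3.8).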
