
lsj2408 / transformer-m


[ICLR 2023] One Transformer Can Understand Both 2D & 3D Molecular Data (official implementation)

Home Page: https://arxiv.org/abs/2210.01765

License: MIT License

Python 97.18% Cython 0.59% Shell 0.45% C++ 0.59% Cuda 1.07% Lua 0.12%
general-purpose-molecular-model graph-neural-network graph-transformer molecular-modeling molecule transformer

transformer-m's People

Contributors

lsj2408


transformer-m's Issues

How to encode proteins in the PDBbind task?

Very enlightening work, and congratulations on your great results in the OGB Challenge! I also noticed that you fine-tuned on the PDBbind dataset. How do you encode the protein information? Since proteins usually contain far more heavy atoms than small molecules, do you encode them directly with Transformer-M?

Question about inputting only 2D Data

Hi!

Thank you for introducing such an interesting model to us and sharing the code!

I'm trying to run the model on 2D structures only. Would you mind providing a script for training the model with 2D structures alone (e.g., on PCQM4M-LSC-V2)?

I tried changing the dataset_name and setting add_3D to false in the sample 3D training script from the README, but that doesn't work. Looking into the code, I found that in tasks/graph_prediction.py, the load_dataset function of class GraphPredictionTask sets dataset_version to "2D" for PCQM4M-LSC-V2 when constructing BatchedDataDataset, and this leads to an error at criterions/graph_predictions.py line 45: ori_pos = sample['net_input']['batched_data']['pos'], KeyError: 'pos'.
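In case it is useful, here is a minimal workaround sketch, untested and assuming the 3D positions are only consumed by the denoising part of the loss (names are taken from the error message above, not verified against the rest of the criterion):

batched_data = sample['net_input']['batched_data']
# Guard the lookup so 2D-only batches skip the 3D branch entirely.
ori_pos = batched_data.get('pos', None)
if ori_pos is not None:
    # ... compute the 3D denoising term as before ...
    pass
# the 2D property-prediction loss proceeds unchanged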

Thank you so much!

Load checkpoint when fine-tuning QM9

Hi,
I ran into a problem when trying to load 'L12-old.pt' in finetuneqm9.sh: the program reported that the checkpoint's structure does not match the model. How can I solve this?

Difficulties setting up the environment to reproduce results

Hi,

Thank you for the code and the surrounding instructions!

I was trying to reproduce the results but am having some difficulty getting the environment to work.

I installed CUDA and the other package versions as described, but torch_scatter errored out with "'NoneType' object has no attribute 'origin'". After looking it up online, I uninstalled the recommended version and installed another one with pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html (even though I have PyTorch 1.7.1). But now torch_sparse errors out with:

div(float a, Tensor b) -> (Tensor):
  Expected a value of type 'Tensor' for argument 'b' but instead found type 'int'.

  div(int a, Tensor b) -> (Tensor):
  Expected a value of type 'Tensor' for argument 'b' but instead found type 'int'.

The original call is:
  File "/nethome/yjakhotiya3/miniconda3/envs/Transformer-M/lib/python3.7/site-packages/torch_sparse/storage.py", line 316
        idx = self.sparse_size(1) * self.row() + self.col()

        row = torch.div(idx, num_cols, rounding_mode='floor')
              ~~~~~~~~~ <--- HERE
        col = idx % num_cols
        assert row.dtype == torch.long and col.dtype == torch.long

I also tried other machines but got unknown CUDA errors from torch.distributed (which could be due to an unrelated driver version mismatch).

Did you encounter any of these issues or do you have any advice on how to navigate them?
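For reference, one likely cause (my reading of the traceback, not verified against the authors' setup): torch.div's rounding_mode argument only exists from PyTorch 1.8 onward, so a torch_sparse wheel that calls it cannot run on torch 1.7.1. A quick pairing check:

import torch
import torch_scatter
import torch_sparse

# Print the installed combination; torch_scatter/torch_sparse wheels must be
# built for exactly this torch/CUDA pairing.
print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("torch_scatter:", torch_scatter.__version__)
print("torch_sparse:", torch_sparse.__version__)

try:
    torch.div(torch.tensor([3]), torch.tensor([2]), rounding_mode="floor")
    print("rounding_mode supported (torch >= 1.8)")
except TypeError:
    print("rounding_mode unsupported: use a torch_sparse wheel built for torch 1.7.x")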

Training on QM9

Hi,

Would it be possible to provide the commands for training a model on QM9 from scratch? This is mentioned in Appendix B5, where the effectiveness of pre-training is investigated.

Kind regards,

Rob

Performance Issue: Slow read_csv() Function with pandas Version 1.3.4 for CSV Files

Issue Description:
Hello.
I have discovered a performance regression in the read_csv function of pandas version 1.3.4 when handling CSV files with a large number of columns. It increases the loading time from a few seconds under the previous version 1.2.5 to several minutes, almost a 60x difference. I found some related discussions on GitHub, including #44106 and #44192.
I found that Transformer-M/data/wrapper.py, examples/MMPT/scripts/video_feature_extractor/videoreader.py, and examples/MMPT/mmpt/processors/dsprocessor.py use the affected API.

Steps to Reproduce:

I have created a small reproducible example to better illustrate this issue.

# pandas v1.3.4
import os
import timeit

import numpy
import pandas

def generate_sample():
    # Write a 5-row, 100,000-feature CSV once; both runs reuse it.
    if not os.path.exists("test_small.csv.gz"):
        nb_col = 100000
        nb_row = 5
        feature_list = {'sample': ['s_' + str(i + 1) for i in range(nb_row)]}
        for i in range(nb_col):
            feature_list['feature_' + str(i + 1)] = list(
                numpy.random.uniform(low=0, high=10, size=nb_row))
        df = pandas.DataFrame(feature_list)
        df.to_csv("test_small.csv.gz", index=False, float_format="%.6f")

def load_csv_file():
    # Read only the header first, then load the full file with an explicit
    # float32 dtype for every column -- the pattern that hits the slow path.
    col_names = pandas.read_csv("test_small.csv.gz", low_memory=False, nrows=1).columns
    types_dict = {col: numpy.float32 for col in col_names}
    types_dict.update({'sample': str})
    feature_df = pandas.read_csv("test_small.csv.gz", index_col="sample",
                                 na_filter=False, dtype=types_dict, low_memory=False)
    print("loaded dataframe shape:", feature_df.shape)

generate_sample()
print(timeit.timeit(load_csv_file, number=1))

# results under v1.3.4
loaded dataframe shape: (5, 100000)
120.37690759263933
Running the identical script under pandas v1.3.5:

# results under v1.3.5
loaded dataframe shape: (5, 100000)
2.8567268839105964

Suggestion

I would recommend upgrading to pandas >= 1.3.5 or exploring other ways to optimize CSV loading.
Any other workarounds or solutions would be greatly appreciated.
Thank you!
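If upgrading is not an option, one possible workaround is to avoid the very large per-column dtype dict and cast after loading. This is only a sketch; I have not benchmarked it against this exact regression:

import numpy
import pandas

# Load without the 100,000-entry dtype dict that appears to trigger the slow
# path in 1.3.4, then downcast the feature columns to float32 in one step.
feature_df = pandas.read_csv("test_small.csv.gz", index_col="sample",
                             na_filter=False, low_memory=False)
feature_df = feature_df.astype(numpy.float32)
print("loaded dataframe shape:", feature_df.shape)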

Errors when evaluating the model

I tried to evaluate the model using the L12 checkpoint, and an error occurred:
AttributeError: 'Namespace' object has no attribute 'load_qm9'

I added the parameter '--load-qm9' in evaluate.sh and ran 'bash evaluate.sh'. A new error was raised:
RuntimeError: Error(s) in loading state_dict for TransformerMModel:
Unexpected key(s) in state_dict: "encoder.molecule_encoder.atom_proc.q_proj.weight", "encoder.molecule_encoder.atom_proc.q_proj.bias", "encoder.molecule_encoder.atom_proc.k_proj.weight", "encoder.molecule_encoder.atom_proc.k_proj.bias", "encoder.molecule_encoder.atom_proc.v_proj.weight", "encoder.molecule_encoder.atom_proc.v_proj.bias", "encoder.molecule_encoder.atom_proc.force_proj1.weight", "encoder.molecule_encoder.atom_proc.force_proj1.bias", "encoder.molecule_encoder.atom_proc.force_proj2.weight", "encoder.molecule_encoder.atom_proc.force_proj2.bias", "encoder.molecule_encoder.atom_proc.force_proj3.weight", "encoder.molecule_encoder.atom_proc.force_proj3.bias".
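In case it helps, a hedged workaround sketch: the atom_proc.* entries look like a head used only during pre-training, though I have not confirmed that against the model code. Dropping them and loading non-strictly would look like this (the 'model' variable and the {'model': state_dict} checkpoint layout are assumed from fairseq conventions, so verify both):

import torch

state = torch.load("L12.pt", map_location="cpu")
model_state = state.get("model", state)
# Drop the unexpected pre-training-only parameters before loading.
filtered = {k: v for k, v in model_state.items() if ".atom_proc." not in k}
missing, unexpected = model.load_state_dict(filtered, strict=False)
print("missing:", missing, "unexpected:", unexpected)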

Code associated with fine-tuning

Hello, authors! First of all, thank you for open-sourcing the code; it helps a lot with reproducing the paper, and the model's results are impressive. May I also ask roughly when the fine-tuning code will be uploaded?

About Steps Settings

Thank you for releasing Transformer-M. Regarding the warmup_steps and total_steps settings: do I need to divide these step counts by the number of GPUs used? I observed during training that the number of steps per epoch decreases under multi-GPU data parallelism, yet the program still counts steps as if it were running on a single card.
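For intuition (the numbers below are made up, not from this repo): under synchronous data parallelism each optimizer step consumes one batch per GPU, so the updates per epoch shrink by the GPU count, while warmup_steps and total_steps keep counting optimizer updates.

# Illustrative arithmetic only; dataset and batch sizes are assumptions.
dataset_size = 3_000_000   # samples per epoch
batch_per_gpu = 256

for num_gpus in (1, 4):
    effective_batch = batch_per_gpu * num_gpus
    updates_per_epoch = dataset_size // effective_batch
    print(f"{num_gpus} GPU(s): {updates_per_epoch} optimizer updates per epoch")
# 1 GPU(s): 11718 optimizer updates per epoch
# 4 GPU(s): 2929 optimizer updates per epoch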

How to fine-tune on PDBbind

Hi,

It's a very interesting model. Would you mind providing the code for preprocessing the PDBbind data and for fine-tuning on it?

OSError: /home/miniconda3/envs/Transformer-M/lib/python3.7/site-packages/torch_sparse/_convert_cuda.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv

Hi:
Thanks for sharing the code of your cool work.
The code runs fine when I launch train.py from a terminal on Ubuntu 22.04. However, when I debug it in VS Code, an error occurs: OSError: /home/xx/miniconda3/envs/Transformer-M/lib/python3.7/site-packages/torch_sparse/_convert_cuda.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv.

/home/xx/miniconda3/envs/Transformer-M/lib/python3.7/site-packages/torch_sparse/_convert_cuda.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv
  File "/home/xx/Transformer-M-main/Transformer-M/data/dataset.py", line 3, in <module>
    from ogb.lsc import PCQM4Mv2Evaluator
  File "/home/xx/Transformer-M-main/Transformer-M/tasks/graph_prediction.py", line 37, in <module>
    from ..data.dataset import (
  File "/home/xx/Transformer-M-main/fairseq/tasks/__init__.py", line 119, in import_tasks
    importlib.import_module(namespace + "." + task_name)
  File "/home/xx/Transformer-M-main/fairseq/utils.py", line 511, in import_user_module
    import_tasks(tasks_path, f"{module_name}.tasks")
  File "/home/xx/Transformer-M-main/fairseq/options.py", line 237, in get_parser
    utils.import_user_module(usr_args)
  File "/home/xx/Transformer-M-main/fairseq/options.py", line 38, in get_training_parser
    parser = get_parser("Trainer", default_task)
  File "/home/xx/Transformer-M-main/fairseq_cli/train.py", line 493, in cli_main
    parser = options.get_training_parser()
  File "/home/xx/Transformer-M-main/train.py", line 14, in <module>
    cli_main()
OSError: /home/xx/miniconda3/envs/Transformer-M/lib/python3.7/site-packages/torch_sparse/_convert_cuda.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr9_M_addrefEv
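This undefined libstdc++ symbol usually points to an environment mismatch rather than broken code; my guess, not verified, is that the VS Code debugger launches Python with a different environment than your terminal (for example, a system libstdc++ shadowing the conda env's). A quick comparison sketch:

import os
import sys

# Run this from the working terminal AND inside the VS Code debugger,
# then diff the output; any difference usually explains the import failure.
print("python:", sys.executable)   # should point into the Transformer-M env
print("CONDA_PREFIX:", os.environ.get("CONDA_PREFIX"))
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH"))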

Details about PDBBind~

Thank you for your code! It's well written~
I have a few questions about the fine-tuning task on PDBbind. I sincerely look forward to your reply!
1. Inputs: which features of the protein are used as input, and is the pocket (a subsequence) or the full sequence used?
2. Model architecture: are the protein data and ligand data fed into separate encoders, or into the same encoder? If different encoders are used, what type of protein encoder is it, and how are the extracted protein and ligand features combined for the final prediction?
Thank you again for your clarifications!
By the way, may I ask when the fine-tuning code for PDBbind will be released? Thanks!

Unrecognized arguments error when using the provided evaluation code

Hi, here is what confused me. I tried to run the command provided in README.md:

[screenshot of the evaluation command from the README]

but I got the following error:

evaluate.py: error: unrecognized arguments: --add-3d --num-3d-bias-kernel 128 --droppath-prob 0.1 --act-dropout 0.1 --mode-prob 0.2,0.2,0.6

It seems strange, could you help me with it?
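One hedged guess (not confirmed against this repo's evaluate.py): in fairseq, model-specific flags such as --add-3d are only registered once the custom user module is imported, so argparse rejects them if the script never calls import_user_module or the --user-dir path is wrong. A minimal check, with the path being an assumption:

from argparse import Namespace

from fairseq import utils

# Hypothetical path; point user_dir at the directory containing the
# Transformer-M model/task registrations.
utils.import_user_module(Namespace(user_dir="./Transformer-M"))
# If this import fails or is skipped, --add-3d etc. stay unregistered.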

Tasks for fine-tuning QM9

Hi, thank you for the QM9 fine-tuning code. I have a small question about the correspondence between task_idx and the specific QM9 property.
Do you mean that the correspondence is as follows?
[two screenshots showing the presumed task_idx-to-property mapping]
Thank you in advance for your reply.
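For readers who cannot see the screenshots: a common QM9 target ordering is the one used by PyTorch Geometric's QM9 dataset, sketched below. Whether Transformer-M follows this exact mapping is precisely what this issue asks, so treat it as an assumption:

# First 12 regression targets in PyTorch Geometric's QM9 ordering; this is
# an assumption about the mapping, not taken from the Transformer-M code.
QM9_TARGETS_PYG = [
    "mu", "alpha", "homo", "lumo", "gap", "r2",
    "zpve", "U0", "U", "H", "G", "Cv",
]
task_idx = 3
print(task_idx, "->", QM9_TARGETS_PYG[task_idx])  # 3 -> lumo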
