semodi / neuralxc

Implementation of a machine learned density functional

License: BSD 3-Clause "New" or "Revised" License

Languages: Batchfile 0.04%, Shell 0.49%, Python 99.47%
Topics: chemistry, dft, machine-learning, neural-networks, pyscf, siesta

neuralxc's People

Contributors: semodi

neuralxc's Issues

basis nworkers error

Changing nworkers from 1 to anything above 1 in the basis.json file causes the following error for every molecule I have tried (a minimal illustration of the underlying pickling failure follows the traceback):

====== Iteration 0 ======
Calculating 15 systems on
LocalCluster(d5f9cd8d, 'tcp://127.0.0.1:42955', workers=2, threads=2, memory=16.70 GB)
Traceback (most recent call last):
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/distributed/worker.py", line 3398, in dumps_function
    result = cache_dumps[func]
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/distributed/utils.py", line 1523, in __getitem__
    value = super().__getitem__(key)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/collections/__init__.py", line 991, in __getitem__
    raise KeyError(key)
KeyError: <function in_private_dir.<locals>.wrapper_private_dir at 0x7f66774bd950>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/distributed/protocol/pickle.py", line 49, in dumps
    result = pickle.dumps(x, **dump_kwargs)
AttributeError: Can't pickle local object 'in_private_dir.<locals>.wrapper_private_dir'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/awills/anaconda3/envs/nxc/bin/neuralxc", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/home/awills/Documents/Research/neuralxc/bin/neuralxc", line 266, in <module>
    func(**args_dict)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/drivers/model.py", line 209, in sc_driver
    kwargs=engine_kwargs)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/preprocessor/driver.py", line 141, in driver
    results = calculate_distributed(atoms, app, dir, kwargs, nworkers)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/preprocessor/driver.py", line 106, in calculate_distributed
    [app] * len(atoms), [kwargs] * len(atoms))
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/distributed/client.py", line 1774, in map
    actors=actor,
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/distributed/client.py", line 2542, in _graph_to_futures
    dsk = dsk.__dask_distributed_pack__(self, keyset)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/dask/highlevelgraph.py", line 939, in __dask_distributed_pack__
    client_keys,
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/dask/highlevelgraph.py", line 392, in __dask_distributed_pack__
    dsk = toolz.valmap(dumps_task, dsk)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/toolz/dicttoolz.py", line 83, in valmap
    rv.update(zip(d.keys(), map(func, d.values())))
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/distributed/worker.py", line 3436, in dumps_task
    return {"function": dumps_function(task[0]), "args": warn_dumps(task[1:])}
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/distributed/worker.py", line 3400, in dumps_function
    result = pickle.dumps(func, protocol=4)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/distributed/protocol/pickle.py", line 60, in dumps
    result = cloudpickle.dumps(x, **dump_kwargs)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/cloudpickle/cloudpickle_fast.py", line 102, in dumps
    cp.dump(obj)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/cloudpickle/cloudpickle_fast.py", line 563, in dump
    return Pickler.dump(self, obj)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/pickle.py", line 409, in dump
    self.save(obj)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/cloudpickle/cloudpickle_fast.py", line 745, in save_function
    *self._dynamic_function_reduce(obj), obj=obj
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/cloudpickle/cloudpickle_fast.py", line 682, in _save_reduce_pickle5
    dictitems=dictitems, obj=obj
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/pickle.py", line 610, in save_reduce
    save(args)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/pickle.py", line 751, in save_tuple
    save(element)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/pickle.py", line 736, in save_tuple
    save(element)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/dill/_dill.py", line 1169, in save_cell
    f = obj.cell_contents
ValueError: Cell is empty
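
The failure appears to stem from dask needing to serialize the task function before shipping it to workers: in_private_dir.<locals>.wrapper_private_dir is a closure defined inside another function, which the standard pickler refuses outright, and here even the cloudpickle/dill fallback trips over an empty cell. A minimal sketch of the general failure mode, assuming a decorator shaped like NeuralXC's in_private_dir (the decorator body below is hypothetical, not the actual implementation):

import pickle


def in_private_dir(func):
    # Hypothetical stand-in: the decorator returns a wrapper that exists
    # only as in_private_dir.<locals>.wrapper_private_dir, with no
    # module-level name the pickler could import it from.
    def wrapper_private_dir(*args, **kwargs):
        return func(*args, **kwargs)

    return wrapper_private_dir


@in_private_dir
def calculate(system):
    return system


try:
    # dask must serialize the task function to send it to its workers;
    # plain pickle refuses local objects outright.
    pickle.dumps(calculate)
except AttributeError as exc:
    # Can't pickle local object 'in_private_dir.<locals>.wrapper_private_dir'
    print(exc)

With nworkers=1 the function presumably never has to cross a process boundary, which would explain why the error only appears above 1.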

Travis CI Security Breach Notice

MolSSI is reaching out to every repository created from the MolSSI Cookiecutter-CMS that has a .travis.yml file present, to alert maintainers to a potential security breach involving the Travis-CI service.

Between September 3 and September 10, 2021, the secure environment variables Travis-CI uses were leaked for ALL projects and injected into the publicly available runtime logs. See more details here. All Travis-CI users should rotate any secure variables, files, and associated credentials as soon as possible. As good stewards of the third-party products we have recommended and that may still be in use, we are reaching out to warn our end-users, given the potential severity of the breach.

We at MolSSI recommend moving away from Travis-CI to another CI provider as soon as possible. Given the nature of this breach and the way Travis-CI mishandled its response, MolSSI cannot recommend the Travis-CI platform for any reason at this time. We suggest either GitHub Actions (used from v1.5 of the Cookiecutter-CMS onward) or another service offered on GitHub.

If you have already addressed this security concern or it does not apply to you, feel free to close this issue.

This issue was created programmatically to reach as many potential end-users as possible. We do apologize if this was sent in error.

OSError: Unable to create link (name already exists)

====== Iteration 0 ======

Running SCF calculations ...
-----------------------------

converged SCF energy = -76.3545540706806
converged SCF energy = -76.3508207847105
converged SCF energy = -76.355707764332
converged SCF energy = -76.356824320776
converged SCF energy = -76.3739444533522
converged SCF energy = -76.3695047518221
converged SCF energy = -76.3694359857328
converged SCF energy = -76.3496333319949
converged SCF energy = -76.3557216068751
converged SCF energy = -76.3662805538731

Projecting onto basis ...
-----------------------------

workdir/0/pyscf.chkpt
workdir/1/pyscf.chkpt
workdir/2/pyscf.chkpt
workdir/3/pyscf.chkpt
workdir/4/pyscf.chkpt
workdir/5/pyscf.chkpt
workdir/6/pyscf.chkpt
workdir/7/pyscf.chkpt
workdir/8/pyscf.chkpt
workdir/9/pyscf.chkpt
10 systems found, adding 97a66c91908d8f76f249705362d9e536
10 systems found, adding energy
10 systems found, adding energy

Baseline accuracy
-----------------------------

{'mae': 0.05993, 'max': 0.09156, 'mean deviation': 0.0, 'rmse': 0.06635}

Fitting initial ML model ...
-----------------------------

Using symmetrizer  trace
Fitting 4 folds for each of 3 candidates, totalling 12 fits
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.737958  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.013578  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.012651  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.011192  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.009135  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.006535  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.003770  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.001574  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.000567  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.000380  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.000298  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.000238  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.000191  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.000144  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.000111  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.000086  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.000066  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.000051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.000039  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.000101  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.000088  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.688702  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.005803  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.004593  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.004208  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.004022  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.003779  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.003422  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.002935  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.002444  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002092  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.001836  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.001651  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.001514  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.001407  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.001320  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.001247  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.001183  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.001126  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.001074  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.001024  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.000981  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.172542  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.003400  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.002257  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.001851  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.001511  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.001225  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.000936  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.000696  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.000498  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.000386  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.000261  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.000219  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.000208  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.000247  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.000201  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.000199  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.001895  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.000876  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.000205  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.000184  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.000181  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.276985  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.001160  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.000992  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.000924  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.000869  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.000834  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.000810  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.000787  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.000763  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.000737  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.000716  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.000682  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.000656  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.000630  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.000606  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.000584  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.000562  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.000541  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.000522  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.000504  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.000487  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.624111  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.006432  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.005881  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.005858  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.005857  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.005869  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.005868  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.005859  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.005864  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.005861  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 1.096901  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.013148  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.005045  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.007128  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.007204  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00014: reducing learning rate of group 0 to 1.0000e-04.
Epoch 13000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 15000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 16000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.007109  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.007109  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.007108  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.441285  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.006409  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.006473  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00013: reducing learning rate of group 0 to 1.0000e-04.
Epoch 12000 ||  Training loss : 0.006475  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 14000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 15000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 16000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.006471  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.006472  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.706089  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.009280  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.006735  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.006113  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.005982  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.005990  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00017: reducing learning rate of group 0 to 1.0000e-04.
Epoch 16000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.005973  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.692270  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.003213  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.001989  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.001688  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.001691  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.001728  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.001731  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.001728  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00015: reducing learning rate of group 0 to 1.0000e-04.
Epoch 14000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 16000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.001727  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.419515  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.006821  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.003581  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.002444  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.002266  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.002343  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.002457  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.002566  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.002680  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002774  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.002822  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.002834  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.002836  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.002836  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00016: reducing learning rate of group 0 to 1.0000e-04.
Epoch 15000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.002837  Validation loss : 0.000000  Learning rate: 0.0001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 1.116178  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.017524  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.010454  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.009555  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.008318  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.006758  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.005142  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.003908  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.003240  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002890  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.002633  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.002399  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.002211  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.002099  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.002061  Validation loss : 0.000000  Learning rate: 0.001
Epoch 15000 ||  Training loss : 0.002053  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.002051  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.585081  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.008857  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.005610  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.004271  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.003184  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.002515  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.002223  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.002111  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.002070  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002063  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Overwritten attributes  get_veff  of <class 'pyscf.dft.rks.RKS'>
Epoch 15000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 17000 ||  Training loss : 0.002109  Validation loss : 0.000000  Learning rate: 0.001
Epoch 18000 ||  Training loss : 0.003392  Validation loss : 0.000000  Learning rate: 0.001
Epoch 19000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Epoch 20000 ||  Training loss : 0.002062  Validation loss : 0.000000  Learning rate: 0.001
Activation unknown, defaulting to GELU
ModuleDict(
  (X): Linear(in_features=19, out_features=1, bias=True)
)
Epoch 0 ||  Training loss : 0.401415  Validation loss : 0.000000  Learning rate: 0.001
Epoch 1000 ||  Training loss : 0.004100  Validation loss : 0.000000  Learning rate: 0.001
Epoch 2000 ||  Training loss : 0.003189  Validation loss : 0.000000  Learning rate: 0.001
Epoch 3000 ||  Training loss : 0.003003  Validation loss : 0.000000  Learning rate: 0.001
Epoch 4000 ||  Training loss : 0.002946  Validation loss : 0.000000  Learning rate: 0.001
Epoch 5000 ||  Training loss : 0.002949  Validation loss : 0.000000  Learning rate: 0.001
Epoch 6000 ||  Training loss : 0.002954  Validation loss : 0.000000  Learning rate: 0.001
Epoch 7000 ||  Training loss : 0.002955  Validation loss : 0.000000  Learning rate: 0.001
Epoch 8000 ||  Training loss : 0.002957  Validation loss : 0.000000  Learning rate: 0.001
Epoch 9000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 10000 ||  Training loss : 0.002957  Validation loss : 0.000000  Learning rate: 0.001
Epoch 11000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 12000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 13000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 14000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 00016: reducing learning rate of group 0 to 1.0000e-04.
Epoch 15000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.001
Epoch 16000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 17000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 18000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 19000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001
Epoch 20000 ||  Training loss : 0.002956  Validation loss : 0.000000  Learning rate: 0.0001


====== Iteration 1 ======
Using symmetrizer  trace
Success!

Running SCF calculations ...
-----------------------------

NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3529189373852
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3482785744807
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3557948619506
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3566029382129
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3773105081291
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3717756096372
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3728823731244
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3468435329547
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.355468924666
NeuralXC: Loading model from /home/egezer/neuralxc/examples/quickstart/sc/model_it1.jit
NeuralXC: Model successfully loaded
converged SCF energy = -76.3696142171586

Projecting onto basis...
-----------------------------

workdir/0/pyscf.chkpt
workdir/1/pyscf.chkpt
workdir/2/pyscf.chkpt
workdir/3/pyscf.chkpt
workdir/4/pyscf.chkpt
workdir/5/pyscf.chkpt
workdir/6/pyscf.chkpt
workdir/7/pyscf.chkpt
workdir/8/pyscf.chkpt
workdir/9/pyscf.chkpt
10 systems found, adding 97a66c91908d8f76f249705362d9e536
Traceback (most recent call last):
  File "/home/egezer/.local/bin/neuralxc", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/home/egezer/neuralxc/bin/neuralxc", line 240, in <module>
    func(**args_dict)
  File "/home/egezer/neuralxc/neuralxc/drivers/model.py", line 266, in sc_driver
    pre_driver(
  File "/home/egezer/neuralxc/neuralxc/drivers/other.py", line 210, in pre_driver
    add_data_driver(hdf5=file, system=system, method=method, density=filename, add=[], traj=xyz, override=True)
  File "/home/egezer/neuralxc/neuralxc/drivers/data.py", line 81, in add_data_driver
    obs(observable, zero)
  File "/home/egezer/neuralxc/neuralxc/drivers/data.py", line 74, in obs
    add_density((density.split('/')[-1]).split('.')[0], file, data, system, method, override)
  File "/home/egezer/neuralxc/neuralxc/datastructures/hdf5.py", line 19, in add_density
    return add_data(key, *args, **kwargs)
  File "/home/egezer/neuralxc/neuralxc/datastructures/hdf5.py", line 97, in add_data
    create_dataset()
  File "/home/egezer/neuralxc/neuralxc/datastructures/hdf5.py", line 94, in create_dataset
    cg.create_dataset(which, data=data)
  File "/home/egezer/.local/lib/python3.10/site-packages/h5py/_hl/group.py", line 139, in create_dataset
    self[name] = dset
  File "/home/egezer/.local/lib/python3.10/site-packages/h5py/_hl/group.py", line 371, in __setitem__
    h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5o.pyx", line 202, in h5py.h5o.link
OSError: Unable to create link (name already exists)
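
The final OSError is h5py's standard complaint when a dataset is created under a name that already exists in the file, here apparently because iteration 1 re-adds a density that iteration 0 already stored under the same key. A minimal sketch of the failure and the usual delete-before-create override pattern (file, group, and dataset names below are made up):

import h5py
import numpy as np

with h5py.File("data.h5", "w") as f:
    grp = f.create_group("system/method")
    grp.create_dataset("density", data=np.arange(10.0))
    try:
        # Re-creating the same name raises the error from the traceback.
        grp.create_dataset("density", data=np.arange(10.0))
    except (OSError, ValueError) as exc:
        print(exc)
    # Override pattern: unlink the old dataset, then write the new one.
    del grp["density"]
    grp.create_dataset("density", data=np.arange(10.0))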

spec_agnostic error

Running the template files provided in examples/example_scripts/train_model on aspirin or ethanol from sGDML's datasets yields the following error. Changing spec_agnostic from false to true avoids it. A minimal illustration of the suspected key mismatch follows the traceback.

...
Epoch 2000 ||  Training loss : 0.034451  Validation loss : 0.000000  Learning rate: 1.0000000000000002e-07
21
(9, 15, 160)
(4, 15, 160)
(8, 15, 36)
====== Iteration 1 ======
Using symmetrizer  trace
Traceback (most recent call last):
  File "/home/awills/Documents/Research/neuralxc/neuralxc/ml/pipeline.py", line 245, in serialize_pipeline
    projector = DensityProjector(basis_instructions=basis_instructions, unitcell=unitcell_c, grid=grid_c)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/projector/projector.py", line 42, in DensityProjector
    return registry[projector_type](**kwargs)
TypeError: __init__() missing 1 required positional argument: 'mol'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/awills/Documents/Research/neuralxc/neuralxc/ml/pipeline.py", line 251, in serialize_pipeline
    grid_weights=grid_c)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/projector/projector.py", line 42, in DensityProjector
    return registry[projector_type](**kwargs)
TypeError: __init__() missing 1 required positional argument: 'mol'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/awills/anaconda3/envs/nxc/bin/neuralxc", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/home/awills/Documents/Research/neuralxc/bin/neuralxc", line 266, in <module>
    func(**args_dict)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/drivers/model.py", line 249, in sc_driver
    'radial' in pre['preprocessor'].get('projector_type', 'ortho'))
  File "/home/awills/Documents/Research/neuralxc/neuralxc/drivers/model.py", line 131, in serialize
    xc.ml.pipeline.serialize_pipeline(model, jit_path, override=True)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/ml/pipeline.py", line 260, in serialize_pipeline
    serialize_energy(model, C, outpath, override)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/ml/pipeline.py", line 172, in serialize_energy
    e_models[spec] = torch.jit.trace(epred, c, check_trace=False)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/jit/_trace.py", line 742, in trace
    _module_class,
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/jit/_trace.py", line 940, in trace_module
    _force_outplace,
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/ml/pipeline.py", line 142, in forward
    return self.model(C)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/home/awills/anaconda3/envs/nxc/lib/python3.6/site-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/ml/transformer.py", line 129, in forward
    return self.transform(X, wrap_torch=False)
  File "/home/awills/Documents/Research/neuralxc/neuralxc/ml/transformer.py", line 79, in transform
    results_dict[spec] = self._spec_dict[spec].transform(x[spec])
KeyError: 'X'
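
The KeyError suggests a mismatch between how the fitted per-species transformers are keyed and how features arrive during JIT serialization: with spec_agnostic set to false, the transformer dict is presumably keyed by element symbols, while the tracing path hands in data under the species-agnostic placeholder key 'X'. A toy illustration of that mismatch (the names below are hypothetical, not NeuralXC's API):

per_species = {"H": lambda c: c, "O": lambda c: c}   # keyed by element symbol
features = {"X": [0.1, 0.2]}                         # species-agnostic key

try:
    for spec, x in features.items():
        per_species[spec](x)
except KeyError as exc:
    print("missing key:", exc)  # missing key: 'X'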

AttributeError: 'GroupedStandardScaler' object has no attribute 'threshold'

====== Iteration 0 ======

Running SCF calculations ...
-----------------------------

converged SCF energy = -76.3545540706806
converged SCF energy = -76.3508207847106
converged SCF energy = -76.3557077643318
converged SCF energy = -76.3568243207759
converged SCF energy = -76.3739444533523
converged SCF energy = -76.369504751822
converged SCF energy = -76.3694359857327
converged SCF energy = -76.349633331995
converged SCF energy = -76.3557216068751
converged SCF energy = -76.3662805538731

Projecting onto basis ...
-----------------------------

workdir/0/pyscf.chkpt
workdir/1/pyscf.chkpt
workdir/2/pyscf.chkpt
workdir/3/pyscf.chkpt
workdir/4/pyscf.chkpt
workdir/5/pyscf.chkpt
workdir/6/pyscf.chkpt
workdir/7/pyscf.chkpt
workdir/8/pyscf.chkpt
workdir/9/pyscf.chkpt
10 systems found, adding 97a66c91908d8f76f249705362d9e536
10 systems found, adding energy
10 systems found, adding energy

Baseline accuracy
-----------------------------

{'mae': 0.05993, 'max': 0.09156, 'mean deviation': 0.0, 'rmse': 0.06635}

Fitting initial ML model ...
-----------------------------

Using symmetrizer  trace
Traceback (most recent call last):
  File "/home/egezer/.local/bin/neuralxc", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/home/egezer/neuralxc/bin/neuralxc", line 240, in <module>
    func(**args_dict)
  File "/home/egezer/neuralxc/neuralxc/drivers/model.py", line 216, in sc_driver
    statistics_fit = fit_driver(preprocessor='pre.json',
  File "/home/egezer/neuralxc/neuralxc/drivers/model.py", line 358, in fit_driver
    grid_cv = get_grid_cv(hdf5, pre, inputfile, spec_agnostic=pre['preprocessor'].get('spec_agnostic', False))
  File "/home/egezer/neuralxc/neuralxc/ml/utils.py", line 301, in get_grid_cv
    hyper = to_full_hyperparameters(hyper, pipeline.get_params())
  File "/home/egezer/.local/lib/python3.10/site-packages/sklearn/pipeline.py", line 167, in get_params
    return self._get_params("steps", deep=deep)
  File "/home/egezer/.local/lib/python3.10/site-packages/sklearn/utils/metaestimators.py", line 50, in _get_params
    for key, value in estimator.get_params(deep=True).items():
  File "/home/egezer/.local/lib/python3.10/site-packages/sklearn/base.py", line 211, in get_params
    value = getattr(self, key)
  File "/home/egezer/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GroupedStandardScaler' object has no attribute 'threshold'
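
This looks like a clash between scikit-learn and PyTorch conventions: BaseEstimator.get_params reads the constructor signature and calls getattr(self, name) for every parameter, while torch.nn.Module.__getattr__ raises AttributeError for anything not stored as a plain attribute, parameter, buffer, or submodule. If GroupedStandardScaler accepts threshold in __init__ but never assigns it to self, get_params fails exactly like this. A sketch with a stand-in class (not the actual NeuralXC code):

import torch
from sklearn.base import BaseEstimator


class GroupedScalerSketch(BaseEstimator, torch.nn.Module):
    def __init__(self, threshold=None):
        torch.nn.Module.__init__(self)
        # Bug being illustrated: `threshold` is accepted but never stored,
        # so getattr(self, 'threshold') falls through to Module.__getattr__.
        # The fix would simply be: self.threshold = threshold


try:
    GroupedScalerSketch().get_params()
except AttributeError as exc:
    print(exc)  # 'GroupedScalerSketch' object has no attribute 'threshold'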

possible memory leak when parallel

When running in parallel, RAM usage during the first machine-learning iteration was stable at around 9 GB. When the second iteration started, RAM usage kept growing until my system ran out of memory. This doesn't happen when n_workers=1 in the JSON files.

Here are my JSON files:
hyperparameters.json

{
    "hyperparameters": {
        "var_selector__threshold": 1e-10,
        "scaler__threshold": null,
        "estimator__n_nodes": 8, 
        "estimator__n_layers": 3,
        "estimator__b": 1e-2,
        "estimator__alpha": 0.001,
        "estimator__max_steps": 2001,
        "estimator__valid_size": 0,
        "estimator__batch_size": 0,
        "estimator__activation": "GELU"
    },
    "cv": 3,
    "n_workers": 2,
    "threads_per_worker": 1,
    "n_jobs": 1
}

basis_sgdml_asp.json

{
    "preprocessor": {
        "basis": "ccpvdz-jkfit",
        "extension": "chkpt",
        "application": "pyscf",
        "spec_agnostic": false,
        "projector_type": "pyscf",
        "symmetrizer_type": "trace"
    },
    "engine_kwargs": {
        "xc": "PBE",
        "basis": "ccpvdz"
    },
    "n_workers": 2
}
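
For reference, the double-underscore keys in hyperparameters.json follow scikit-learn's nested step__parameter convention: scaler__threshold targets the threshold of the pipeline step named scaler, estimator__alpha the alpha of the step named estimator, and so on. A generic illustration with stock scikit-learn pieces (the StandardScaler/Ridge pipeline is just an example, not NeuralXC's actual pipeline):

from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scaler", StandardScaler()), ("estimator", Ridge())])
# step-name + '__' + parameter-name addresses a parameter of that step
pipe.set_params(estimator__alpha=0.001, scaler__with_mean=False)
print(pipe.get_params()["estimator__alpha"])  # 0.001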

Problems when using Brillouin zone sampling with SIESTA

When Brillouin zone sampling is used, SIESTA erroneously passes atomic coordinates for the entire supercell to NeuralXC. To avoid double counting, this needs to be fixed so that only atomic positions within the unit cell are passed (see the sketch below). The fix should happen within SIESTA, and the patch provided here should be updated accordingly.
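
The deduplication described above amounts to reducing Cartesian positions to fractional coordinates and keeping only atoms inside the home cell. A minimal numpy sketch of that filter (the helper name is made up, not part of the patch):

import numpy as np


def filter_to_unitcell(positions, cell, tol=1e-8):
    # positions: (N, 3) Cartesian coordinates; cell: 3x3 matrix of lattice
    # vectors as rows. Keep only atoms whose fractional coordinates lie in
    # [0, 1), discarding the periodic images a supercell adds.
    frac = positions @ np.linalg.inv(cell)
    inside = np.all((frac >= -tol) & (frac < 1.0 - tol), axis=1)
    return positions[inside]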

Question about how to evaluate a trained model

Hi, I recently started trying this repo and found it really cool!
I have managed to run the example in examples/example_scripts/train_model/ on some data and would like to use the final model to evaluate some other molecules. I know that the neuralxc sc ... command can do the testing if I provide a testing.traj.
However, I'd like to use the neuralxc eval ... command so that I don't have to re-train the same model.
The --hdf5 argument requires the path to the hdf5 file, the baseline data, and the reference data. I assume the last one refers to a testing.traj like the one used with neuralxc sc ... in the example. However, I'm not sure what the first two refer to or how to obtain them, and I couldn't find an example in the repo. Could you please give some advice or examples?

Moreover, I'm wondering how to set n_max and l_max as mentioned in the paper. I can't seem to find these options in the hyperparameters.json or the basis.json file.
