
atomisticnet / aenet


Atomic interaction potentials based on artificial neural networks

Home Page: http://ann.atomistic.net

License: Mozilla Public License 2.0

Makefile 4.65% Fortran 88.20% C 0.20% Shell 0.38% Gnuplot 0.38% Python 5.29% Cython 0.90%

aenet's Introduction

What is ænet?

The Atomic Energy NETwork (ænet) package (http://ann.atomistic.net) is a collection of tools for the construction and application of atomic interaction potentials based on artificial neural networks (ANN). The ænet code allows the accurate interpolation of structural energies, e.g., from electronic structure calculations, using ANNs. ANN potentials generated with ænet can then be used in larger scale atomistic simulations and in situations where extensive sampling is required, e.g., in molecular dynamics or Monte-Carlo simulations.

License

Copyright (C) 2012-2022 Nongnuch Artrith ([email protected])

The aenet source code is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the Mozilla Public License, v. 2.0, for more details.

Installation

Short installation summary

  1. Compile the L-BFGS-B library

    • Enter the directory “./lib”

      $ cd ./lib

    • Adjust the compiler settings in the “Makefile”

    • Compile the library with

      $ make

    The library file liblbfgsb.a, required for compiling ænet, will be created.

  2. Compile the ænet package

    • Enter the directory “./src”

      $ cd ./src

    • Compile the ænet source code with

      $ make -f makefiles/Makefile.XXX

      where Makefile.XXX is an appropriate Makefile.

      To see a list of available Makefiles, just type:

      $ make

    The following executables will be generated in “./bin”:

    • generate.x: generate training sets from atomic structure files
    • train.x: train new neural network potentials
    • predict.x: use existing ANN potentials for energy/force prediction
  3. (Optional) Install the Python interface

    • Enter the directory “./python”

      $ cd ./python

    • Install the Python module with

      $ python setup.py install --user

    This will set up the Python ænet module for the current user, and it will also install the user scripts aenet-predict.py and aenet-md.py.

Detailed installation instructions

Except for a number of Python scripts, ænet is developed in Fortran 95/2003. The source code is generally tested with the free GNU Fortran compiler and the commercial Intel Fortran compiler, and Makefile settings for these two compilers are provided. While the ænet source code should be platform independent, we mainly target Linux and Unix clusters; ænet has not been tested on other operating systems.

ænet requires three external libraries:

  1. BLAS (Basic Linear Algebra Subprograms),
  2. LAPACK (Linear Algebra PACKage),
  3. The L-BFGS-B optimization routines by Nocedal et al.

Usually, some implementation of BLAS and LAPACK comes with the operating system or the compiler. If that is not the case, the libraries can be obtained from Netlib.org. libblas.a and liblapack.a have to be in the system library path in order to compile ænet.

The L-BFGS-B routines, an implementation of the bounded limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm, are distributed on the homepage of their authors (Nocedal et al.). For the user's convenience, we have decided to distribute the original L-BFGS-B files along with the ænet package, so you do not have to download the library yourself. However, each application of ænet should also acknowledge the use of the L-BFGS-B library by citing:

R. H. Byrd, P. Lu and J. Nocedal, SIAM J. Sci. Stat. Comp. 16 (1995) 1190-1208.

ænet’s Python interface further relies on NumPy and on the Atomic Simulation Environment (ASE), so these dependencies have to be available when the ænet Python module is set up.
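
Before setting up the module, it can help to verify that both dependencies are importable in the Python environment you intend to use. A minimal check (a sketch only; the version printout is purely informational) could look like this:

    # Check that the ænet Python interface's dependencies are available.
    import numpy
    import ase

    print("NumPy:", numpy.__version__)
    print("ASE:", ase.__version__)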

Compilation of external libraries that are distributed with ænet

All external libraries needed by the ænet code are in the directory “./lib”. Currently, only one external library is distributed with ænet, the L-BFGS-B library (see above).

To compile the external libraries

  1. Enter the directory “./lib”

    $ cd ./lib

  2. Adjust the compiler settings in the “Makefile”

    The Makefile contains settings for the GNU Fortran compiler (gfortran) and the Intel Fortran compiler (ifort). Uncomment the section that is appropriate for your system.

  3. Compile the library with

    $ make

The static library “liblbfgsb.a”, required to build ænet, will be created.

Build ænet

The ænet source code is located in “./src”.

  1. Enter “./src”

    $ cd ./src

  2. To see a short explanation of the Makefiles that come with ænet, just run make without any options.

    $ make

    Select the Makefile that is appropriate for your computer.

  3. Compile with

    $ make -f makefiles/Makefile.XXX

    where Makefile.XXX is the selected Makefile.

Three executables will be generated and stored in “./bin”:

  • generate.x: generate training sets from atomic structure files
  • train.x: train new neural network potentials
  • predict.x: use existing ANN potentials for energy/force prediction

Set up the Python interface

  1. Enter the directory “./python”

    $ cd ./python

  2. Install the Python module with

    $ python setup.py install --user

This will set up the Python ænet module for the current user, and it will also install the user scripts aenet-predict.py and aenet-md.py.
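
Once the module is installed, energies and forces can be obtained through ASE, similar to what aenet-predict.py does. The sketch below is only an illustration: the import path aenet.ase_calculator.ANNCalculator also appears in the issues further down, but the constructor argument used here (a mapping from chemical species to trained .ann potential files) and the file names are assumptions, not documented API.

    # Sketch only: the ANNCalculator import path is referenced elsewhere in this
    # repository, but the constructor arguments and file names are assumptions.
    from ase.io import read
    from aenet.ase_calculator import ANNCalculator

    atoms = read("structure.xsf")                             # any structure ASE can read
    atoms.calc = ANNCalculator(potentials={"Ti": "Ti.ann",    # hypothetical species ->
                                           "O": "O.ann"})     # potential-file mapping
    print("Energy (eV):", atoms.get_potential_energy())
    print("Forces (eV/A):", atoms.get_forces())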

aenet's People

Contributors

alexurba, nartrith

aenet's Issues

Training with forces?

Is it possible for aenet to train on both energies and forces? If so, it is not obvious how to specify that in the input files.

Compiling the parallel version with gfortran and gcc (11.2.0) and openmpi-4.1.1

The error messages are listed below:

mpif90 -c -DPARALLEL -O2 -pedantic -fexternal-blas parallel.F90 -o parallel.o
parallel.F90:1603:23:

1603 | call MPI_allreduce(buff, val, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
| 1
......
1623 | call MPI_allreduce(buff, val, n, MPI_DOUBLE_PRECISION, MPI_SUM, &
| 2
Error: Rank mismatch between actual argument at (1) and actual argument at (2) (rank-1 and scalar)
parallel.F90:1584:23:

1584 | call MPI_allreduce(buff, val, n, MPI_INTEGER, MPI_SUM, &
| 1
......
1623 | call MPI_allreduce(buff, val, n, MPI_DOUBLE_PRECISION, MPI_SUM, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/REAL(8)).
parallel.F90:1564:23:

1564 | call MPI_allreduce(buff, val, 1, MPI_INTEGER, MPI_SUM, &
| 1
......
1623 | call MPI_allreduce(buff, val, n, MPI_DOUBLE_PRECISION, MPI_SUM, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/REAL(8)).
parallel.F90:1469:21:

1469 | call MPI_Recv(val, 1, MPI_LOGICAL, src, tag, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (LOGICAL(4)/CHARACTER(*)).
parallel.F90:1472:21:

1472 | call MPI_Recv(val, 1, MPI_LOGICAL, MPI_ANY_SOURCE, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (LOGICAL(4)/CHARACTER(*)).
parallel.F90:1442:21:

1442 | call MPI_Recv(val, n, MPI_DOUBLE_PRECISION, src, tag, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1445:21:

1445 | call MPI_Recv(val, n, MPI_DOUBLE_PRECISION, MPI_ANY_SOURCE, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1414:21:

1414 | call MPI_Recv(val, 1, MPI_DOUBLE_PRECISION, src, tag, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1417:21:

1417 | call MPI_Recv(val, 1, MPI_DOUBLE_PRECISION, MPI_ANY_SOURCE, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1387:21:

1387 | call MPI_Recv(val, n, MPI_INTEGER, src, tag, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:1390:21:

1390 | call MPI_Recv(val, n, MPI_INTEGER, MPI_ANY_SOURCE, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:1359:21:

1359 | call MPI_Recv(val, 1, MPI_INTEGER, src, tag, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:1362:21:

1362 | call MPI_Recv(val, 1, MPI_INTEGER, MPI_ANY_SOURCE, &
| 1
......
1531 | call MPI_Recv(val, nm, MPI_CHARACTER, src, tag, &
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:1278:18:

1278 | call MPI_Send(val, 1, MPI_LOGICAL, dest, tag, MPI_COMM_WORLD, ierr)
| 1
......
1328 | call MPI_Send(val, mn, MPI_CHARACTER, dest, tag, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (LOGICAL(4)/CHARACTER(*)).
parallel.F90:1257:18:

1257 | call MPI_Send(val, n, MPI_DOUBLE_PRECISION, dest, tag, MPI_COMM_WORLD, ierr)
| 1
......
1328 | call MPI_Send(val, mn, MPI_CHARACTER, dest, tag, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1235:18:

1235 | call MPI_Send(val, 1, MPI_DOUBLE_PRECISION, dest, tag, MPI_COMM_WORLD, ierr)
| 1
......
1328 | call MPI_Send(val, mn, MPI_CHARACTER, dest, tag, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1214:18:

1214 | call MPI_Send(val, n, MPI_INTEGER, dest, tag, MPI_COMM_WORLD, ierr)
| 1
......
1328 | call MPI_Send(val, mn, MPI_CHARACTER, dest, tag, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:1192:18:

1192 | call MPI_Send(val, 1, MPI_INTEGER, dest, tag, MPI_COMM_WORLD, ierr)
| 1
......
1328 | call MPI_Send(val, mn, MPI_CHARACTER, dest, tag, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:1103:22:

1103 | call MPI_Bcast(val, n, MPI_LOGICAL, root, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (LOGICAL(4)/CHARACTER(*)).
parallel.F90:1105:22:

1105 | call MPI_Bcast(val, n, MPI_LOGICAL, 0, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (LOGICAL(4)/CHARACTER(*)).
parallel.F90:1079:22:

1079 | call MPI_Bcast(val, 1, MPI_LOGICAL, root, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (LOGICAL(4)/CHARACTER(*)).
parallel.F90:1081:22:

1081 | call MPI_Bcast(val, 1, MPI_LOGICAL, 0, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (LOGICAL(4)/CHARACTER(*)).
parallel.F90:1056:22:

1056 | call MPI_Bcast(val, n, MPI_DOUBLE_PRECISION, root, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1058:22:

1058 | call MPI_Bcast(val, n, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1032:22:

1032 | call MPI_Bcast(val, 1, MPI_DOUBLE_PRECISION, root, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1034:22:

1034 | call MPI_Bcast(val, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (REAL(8)/CHARACTER(*)).
parallel.F90:1009:22:

1009 | call MPI_Bcast(val, n, MPI_INTEGER, root, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:1011:22:

1011 | call MPI_Bcast(val, n, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:985:22:

985 | call MPI_Bcast(val, 1, MPI_INTEGER, root, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
parallel.F90:987:22:

987 | call MPI_Bcast(val, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
| 1
......
1159 | call MPI_Bcast(val, m, MPI_CHARACTER, root, MPI_COMM_WORLD, ierr)
| 2
Error: Type mismatch between actual argument at (1) and actual argument at (2) (INTEGER(4)/CHARACTER(*)).
make: *** [parallel.o] Error 1

How to generate a .stp file

Hello, I want to use aenet to train a model. I see that fingerprint.f90 can compute the structural fingerprints.
However, when I use it on a directory containing several XSF files, it reports a "segmentation fault" error. When I use it on a single XSF file, it generates some output, but not a .stp file. How should I use it?
Thanks a lot.

As follows:


$ ./fingerprint.x xsf/structure10000136911.xsf 6.5 12 6.5 4 Ti O
xsf/structure10000136911.xsf 75
xsf/structure10000136911.xsf 75
xsf/structure10000136911.xsf 74
xsf/structure10000136911.xsf 74
xsf/structure10000136911.xsf 70
xsf/structure10000136911.xsf 78
xsf/structure10000136911.xsf 73
xsf/structure10000136911.xsf 78
xsf/structure10000136911.xsf 73
xsf/structure10000136911.xsf 73
xsf/structure10000136911.xsf 77
xsf/structure10000136911.xsf 76
xsf/structure10000136911.xsf 1.42661317E+01 1.48715715E+00 -1.10546093E+01 -2.17259282E+00 5.09528968E+00 6.81191796E-02 -1.23834783E+00 1.48251178E+00 -1.43082665E-01 -1.34007727E+00 2.51005362E-01 4.36856773E-01 1.17738736E-01 9.89171875E+01 -1.00225288E+02 1.29628481E+02 -2.65745459E+02 6.70475042E+02 -4.83391464E+00 -3.57385843E-01 3.70342088E+00 4.06425929E-01 -1.64218313E+00 1.56185631E-01 3.10252400E-01 -2.78981747E-01 2.03700063E-01 5.25495597E-02 -5.75246723E-02 6.51295891E-02 -5.73658127E-01 9.05182379E+00 -8.11461593E+00 7.67151255E+00 -1.42835689E+01 3.89613732E+01

$ ./fingerprint.x xsf/ 6.5 12 6.5 4 Ti O
Segmentation fault
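
For reference, the numeric values printed by fingerprint.x can be collected for further analysis with a few lines of Python. The sketch below only assumes the layout shown above (each output line starts with the .xsf file name followed by whitespace-separated numbers, with the fingerprint values on the last line) and a hypothetical file fingerprint.log holding that captured output; it is not an official parser.

    # Sketch: extract the numbers from saved fingerprint.x output. The layout is
    # assumed from the example above; "fingerprint.log" is a hypothetical capture.
    import numpy as np

    with open("fingerprint.log") as fh:
        rows = [line.split() for line in fh if line.strip()]

    counts = [int(r[1]) for r in rows[:-1]]                    # per-line integer values
    fingerprint = np.array([float(x) for x in rows[-1][1:]])   # values on the last line
    print(len(counts), fingerprint.size)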


Installing python interface error

I followed the installation process described in the documentation and installed the Python interface using python setup.py install --user.

But when I try to import ANNCalculator using from aenet.ase_calculator import ANNCalculator in a Python script, I get the following error: "core.cpython-36m-x86_64-linux-gnu.so: undefined symbol: AENET_ERR_MALLOC"

I tried to compile aenet using both gfortran_serial and gfortran_mpi; both give the same issue.

Also, does the Python interface only work with Python 3?

Python interface for ase is missing?

In an earlier version there was a Python interface that allowed one to use an aenet-trained potential in ASE. It seems to be missing in this version. Do you know if the older version still works with this?

Running in parallel?

Hi, I have built an mpi version of aenet via:

make -f makefiles/Makefile.gfortran_mpi

As far as I can see it worked fine, and I have a new train.x-2.0.3-gfortran_mpi executable. It is not obvious to me that it does anything in parallel, though. If I run it like this:

mpirun -np 6 train.x-2.0.3-gfortran_mpi train.in

I can't tell that it is running on 6 cores, and only energies.test.0 and energies.train.0 are created (I thought perhaps 6 of them might be created).

Is there anything you need to put in the input file, or anything else required, to run aenet in parallel? Thanks.

Angular fingerprint calculation with Chebyshev polynomials

I am reading the source code of the fingerprint calculation. In line 817 of
src/ext/sfbasis.f90

f = chebyshev_polynomial(cos_ijk, 0.0d0, PI, sfb%a_order)

chebyshev_polynomial takes cos_ijk as input, and cos_ijk comes from

cos_ijk = dot_product(R_ij, R_ik)/(d_ij*d_ik)

The cosine value ranges from -1 to 1, but in line 817 the interval [0.0, PI] is used. Are there any special considerations behind this? Also, why not use the angle in radians instead of the cosine?
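
For reference, a common convention for a call like chebyshev_polynomial(x, a, b, n) is that [a, b] is the expected range of the raw input x, which is rescaled to [-1, 1] before the Chebyshev polynomials are evaluated; under that convention, passing [0, PI] together with a cosine input is exactly what this question is about. The NumPy sketch below only illustrates the generic convention; whether sfbasis.f90 implements it in precisely this way is an assumption.

    # Generic interval-rescaled Chebyshev basis (an assumption about what a
    # routine like chebyshev_polynomial(x, a, b, n) does; not taken from sfbasis.f90).
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def chebyshev_basis(x, a, b, n):
        """Evaluate T_0..T_n at t = 2*(x - a)/(b - a) - 1."""
        t = 2.0 * (np.asarray(x, dtype=float) - a) / (b - a) - 1.0
        return C.chebvander(t, n)

    cos_ijk = -0.3                                   # example bond-angle cosine
    theta = np.arccos(cos_ijk)
    print(chebyshev_basis(theta, 0.0, np.pi, 4))     # basis built from the angle
    print(chebyshev_basis(cos_ijk, -1.0, 1.0, 4))    # basis built from the cosine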

Issue trying to set up the Python interface

After compiling aenet successfully, I followed the instructions in the README.rst in the python3/ folder to install aenet-predict.py and aenet-md.py. I used the following commands:

$ cd ../src
$ make -f ./makefiles/Makefile.XXX lib
$ cd -
$ python setup.py build_ext --inplace

and when I try to run aenet-md.py I get the following error, which looks like something has gone wrong in the Cython wrapper for the ANNPotentials class:

Traceback (most recent call last):
  File "/u/fs1/afh41/.conda/envs/ml-tools/bin/aenet-predict.py", line 4, in <module>
    __import__('pkg_resources').run_script('aenet==0.1.0a1', 'aenet-predict.py')
  File "/u/fs1/afh41/.conda/envs/ml-tools/lib/python3.7/site-packages/pkg_resources/__init__.py", line 667, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/u/fs1/afh41/.conda/envs/ml-tools/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1464, in run_script
    exec(code, namespace, namespace)
  File "/home/users/afh41/.local/lib/python3.7/site-packages/aenet-0.1.0a1-py3.7-linux-x86_64.egg/EGG-INFO/scripts/aenet-predict.py", line 40, in <module>
    from aenet.ase_calculator import ANNCalculator
  File "/u/fs1/afh41/.local/lib/python3.7/site-packages/aenet-0.1.0a1-py3.7-linux-x86_64.egg/aenet/ase_calculator.py", line 24, in <module>
    from aenet.core import ANNPotentials
ImportError: /u/fs1/afh41/.local/lib/python3.7/site-packages/aenet-0.1.0a1-py3.7-linux-x86_64.egg/aenet/core.cpython-37m-x86_64-linux-gnu.so: undefined symbol: mpi_comm_rank_

ld: library not found for -lgfortran

Please help me solve this problem; thank you in advance for your help.

ahmadfaisalharish@Ahmads-MacBook-Pro ~ % cd Downloads
ahmadfaisalharish@Ahmads-MacBook-Pro Downloads % cd aenet-master
ahmadfaisalharish@Ahmads-MacBook-Pro aenet-master % cd lib
ahmadfaisalharish@Ahmads-MacBook-Pro lib % make
gcc -shared Lbfgsb.3.0/blas_pic.o Lbfgsb.3.0/lbfgsb_pic.o Lbfgsb.3.0/linpack_pic.o Lbfgsb.3.0/timer_pic.o -lm -lgfortran -o liblbfgsb.so
ld: library not found for -lgfortran
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [liblbfgsb.so] Error 1

Python3 setup error

Hello,
I compiled aenet with the Intel Fortran compiler, but during the install process the Python module for using ASE does not get set up.

Here is my command and the resulting error:
$ python3.9 setup.py build_ext --inplace

running build_ext
skipping 'aenet/core.c' Cython extension (up-to-date)
building 'aenet.core' extension
gcc -pthread -B /usr/local/miniconda3/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /usr/local/miniconda3/include -fPIC -O2 -isystem /usr/local/miniconda3/include -fPIC -I/home/chkim/.local/lib/python3.9/site-packages/numpy/core/include -I/usr/local/miniconda3/include/python3.9 -c aenet/core.c -o build/temp.linux-x86_64-3.9/aenet/core.o -I../src -I./aenet -fPIC -O2
In file included from /home/chkim/.local/lib/python3.9/site-packages/numpy/core/include/numpy/ndarraytypes.h:1960:0,
from /home/chkim/.local/lib/python3.9/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from /home/chkim/.local/lib/python3.9/site-packages/numpy/core/include/numpy/arrayobject.h:5,
from aenet/core.c:701:
/home/chkim/.local/lib/python3.9/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it with "
^
aenet/core.c:2908:27: warning: ‘__pyx_f_5aenet_4core__chars’ defined but not used [-Wunused-function]
static __Pyx_memviewslice __pyx_f_5aenet_4core__chars(PyObject *__pyx_v_s) {
^
gcc -pthread -B /usr/local/miniconda3/compiler_compat -Wl,--sysroot=/ -shared -Wl,-rpath,/usr/local/miniconda3/lib -Wl,-rpath-link,/usr/local/miniconda3/lib -L/usr/local/miniconda3/lib -Wl,-rpath,/usr/local/miniconda3/lib -Wl,-rpath-link,/usr/local/miniconda3/lib -L/usr/local/miniconda3/lib build/temp.linux-x86_64-3.9/aenet/core.o ../lib/Lbfgsb.3.0/blas_pic.o ../lib/Lbfgsb.3.0/lbfgsb_pic.o ../lib/Lbfgsb.3.0/linpack_pic.o ../lib/Lbfgsb.3.0/timer_pic.o -llapack -lblas -lifort -o /home/chkim/aenet/python3/aenet/core.cpython-39-x86_64-linux-gnu.so
/usr/local/miniconda3/compiler_compat/ld: cannot find -llapack
/usr/local/miniconda3/compiler_compat/ld: cannot find -lblas
/usr/local/miniconda3/compiler_compat/ld: cannot find -lifort
collect2: error: ld returned 1 exit status
error: command '/usr/bin/gcc' failed with exit code 1

I think this is an error from using Intel's compiler.
Could you help me figure out how to deal with this problem?
