Comments (9)
By default, DeepH-pack uses dense matrices to compute the eigenvalues. For large-scale materials such as TBG at a 1.05° twist angle, you should use sparse matrices instead. Please set
[basic]
dense_calc = False
in your inference parameters.
By the way, I just updated a parallel version of the sparse calculation script, which is roughly X times faster (X ≈ the number of CPU cores) than the original sparse calculation script. To use it, update DeepH-pack and install Pardiso.jl and LinearMaps.jl by following the updated README.
One needs about 80 GB of memory to calculate 50 bands of TBG with 11,908 atoms.
from deeph-pack.
@mzjb Thank you for your response, I will try as soon as possible.
Hello, I have a question about the parallel version of the sparse calculation: how can I utilize all of the CPU cores (for example, 64 cores)? Are there any parameters in the .ini file that I can adjust to achieve this?
Use
set_nprocs!(ps, 64)
after line 45 of this file to set the number of threads to 64. I found that the default value is the number of CPU cores when I use Intel oneAPI MKL.
(https://github.com/JuliaSparse/Pardiso.jl#mkl-pardiso-1)
Hi, I have found that Julia 1.5.4 does not support Pardiso.jl 0.5.4, so the function "fix_iparm!" will not be executed. Updating Julia to version 1.8.5 resolved this issue, but I cannot confirm whether the program you wrote supports the 1.8.5 syntax. Please take note of this issue.
Thank you for the reminder. I am actually using Julia 1.6.6, and I forgot to update the README.
Hi, after changing the keyword to
[basic] dense_calc = False
I have a new question: the resulting band structure contains numerous sawtooth-shaped bands, which is evidently incorrect. I suspect that the Hamiltonian matrix may not be as sparse as I originally thought, or perhaps my cutoff radius is set too large. Specifically, my radius is currently set to 9.
Can you give me some advice?
Hi there, thank you for raising this issue. I'm another developer of DeepH (Zechen Tang) and am responding to your question.
I believe there is no fundamental error in your calculation. The sawtooth-shaped bands arise from an incorrect ordering of the bands. In the dense_calc mode, all eigenvalues are calculated and are therefore indexed correctly. In the sparse diagonalization scheme, however, only a few eigenvalues near the Fermi level are calculated, which in general do not form a complete set of bands, so it is very likely that these bands are labelled with incorrect indexes.
If you are using matplotlib.pyplot.plot to plot the band diagram, eigenvalues with the same "index" at different k-points are treated as the same band and joined together to form a line. Whenever the indexes are incorrect, this produces the sawtooth pattern.
Here are two ways to solve this issue:
- Use a scatter plot (matplotlib.pyplot.scatter) instead of a line plot. This way you can see the shape of the bands without being bothered by the sawtooth.
- For gapped systems, you can manually choose an energy level inside a gap and sort all VBMs and CBMs by their distance to this energy level. This gives a correct "index" for all calculated eigenvalues and results in a correct line plot.
Unfortunately, the second approach involves some coding that depends on how you organize your band-eigenvalue output, and we don't have a general script for this. I would recommend trying the first approach instead.
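The two approaches above can be sketched in a few lines of Python. This is only a minimal illustration, not DeepH-pack code: the eigenvalue array, its shape `(n_kpoints, n_eigenvalues)`, and the mid-gap reference energy `e_gap` are all assumptions you would replace with your own data.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Placeholder eigenvalues standing in for sparse-solver output:
# shape (n_kpoints, n_eigenvalues); within each k-point the order is arbitrary.
rng = np.random.default_rng(0)
n_k, n_eig = 50, 8
kpath = np.arange(n_k)
bands = rng.normal(size=(n_k, n_eig))

# Approach 1: scatter plot. Each eigenvalue is an independent point, so no
# (possibly wrong) band index is used to join points into lines.
plt.scatter(np.repeat(kpath, n_eig), bands.ravel(), s=2)
plt.savefig("bands_scatter.png")

# Approach 2 (gapped systems): choose a reference energy inside the gap,
# split eigenvalues into states below and above it, and sort each group at
# every k-point. Columns of the sorted arrays then carry a consistent band
# index and can safely be joined into lines.
e_gap = 0.0  # hypothetical mid-gap energy; must be chosen for your system
valence = np.sort(np.where(bands < e_gap, bands, -np.inf), axis=1)
conduction = np.sort(np.where(bands >= e_gap, bands, np.inf), axis=1)

plt.figure()
for col in range(n_eig):
    v = valence[:, col]
    plt.plot(kpath[np.isfinite(v)], v[np.isfinite(v)])
    c = conduction[:, col]
    plt.plot(kpath[np.isfinite(c)], c[np.isfinite(c)])
plt.savefig("bands_lines.png")
```

The `±inf` padding keeps each column aligned when different k-points contribute different numbers of valence or conduction states; the finite mask strips the padding before plotting.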
We sincerely appreciate you for bringing up this issue. If you have any further questions, we would be more than happy to provide you with additional support.
@aaaashanghai Thank you very much for your response, it completely dispelled my doubts!