
mpnum's Introduction

mpnum

A matrix product representation library for Python


mpnum is a flexible, user-friendly, and expandable toolbox for the matrix product state/tensor train tensor format. mpnum provides:

  • support for well-known matrix product representations, such as:
    • matrix product states (MPS), also known as tensor trains (TT)
    • matrix product operators (MPO)
    • local purification matrix product states (PMPS)
    • arbitrary matrix product arrays (MPA)
  • arithmetic operations: addition, multiplication, contraction etc.
  • compression, canonical forms, etc.
  • finding extremal eigenvalues and eigenvectors of MPOs (DMRG)
  • flexible tools for new matrix product algorithms
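The format behind the features above can be made concrete with a short plain-NumPy sketch. This illustrates the tensor-train representation itself, not mpnum's API; the helper names are chosen for illustration, though the (left bond, physical, right bond) core shape matches the usual convention:

```python
import numpy as np

def random_mps(n_sites, phys_dim, rank):
    """Random MPS cores with shape (left_rank, phys_dim, right_rank)."""
    ranks = [1] + [rank] * (n_sites - 1) + [1]
    return [np.random.randn(ranks[i], phys_dim, ranks[i + 1])
            for i in range(n_sites)]

def to_full_tensor(cores):
    """Contract the virtual (bond) indices to recover the full tensor."""
    res = cores[0]
    for core in cores[1:]:
        # sum over the shared bond index between neighbouring cores
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])

mps = random_mps(4, 2, 3)        # 4 sites, physical dimension 2, rank 3
full = to_full_tensor(mps)       # full tensor of shape (2, 2, 2, 2)
```

The point of the library is to work with the small cores directly, so the full `d**n`-sized tensor above never has to be formed.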

To install the latest stable version run

pip install mpnum

If you want to install mpnum from source, please run (on Unix)

git clone https://github.com/dseuss/mpnum.git
cd mpnum
pip install .

In order to run the tests and build the documentation, you have to install the development dependencies via

pip install -r requirements.txt

For more information, see the documentation.

Required packages:

  • six, numpy, scipy

Supported Python versions:

  • 2.7, 3.4, 3.5, 3.6


How to contribute

Contributions of any kind are very welcome. Please use the issue tracker for bug reports. If you want to contribute code, please see the section on how to contribute in the documentation.

Contributors

License

Distributed under the terms of the BSD 3-Clause License (see LICENSE).

Citations

If you use mpnum for your paper, please cite:

Suess, Daniel and Milan Holzäpfel (2017). mpnum: A matrix product representation library for Python. Journal of Open Source Software, 2(20), 465, https://doi.org/10.21105/joss.00465

mpnum has been used and cited in the following publications:

mpnum's People

Contributors

dsuess, milan-hl


mpnum's Issues

__iter__ and __getattr__

The current behaviour might lead to some unexpected results, such as the following. Define

y = np.random.randn(1)
x = random_mpa(4, 2, 1)

Then, y[0] * x gives a numpy array (since numpy's multiplication with an iterable takes precedence), but x * y[0] is an MPArray.
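The asymmetry can be reproduced without mpnum using a minimal sequence-like class; `LocalTensors` below is a hypothetical stand-in for MPArray, not mpnum code:

```python
import numpy as np

class LocalTensors:
    """Sequence-like (so numpy can coerce it) with its own multiplication,
    mimicking the relevant parts of MPArray."""
    def __init__(self):
        self._lt = [np.ones(2), np.ones(2)]

    def __len__(self):
        return len(self._lt)

    def __getitem__(self, index):
        return self._lt[index]

    def __mul__(self, fac):
        return "scaled LocalTensors"

    __rmul__ = __mul__

y = np.float64(3.0)   # same type as y[0] from np.random.randn(1)
x = LocalTensors()

left = y * x    # np.float64.__mul__ coerces x into an ndarray first,
                # so our __rmul__ is never consulted
right = x * y   # our __mul__ runs and the result stays "ours"
```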

Ideas:

  • create view objects for local tensors and physical tensors
  • don't provide __iter__
  • let __getattr__ with a tuple (or a single number) return partial MPAs

Action of `mpa.compress()` without arguments

I expected that mpa.compress() without arguments would not perform compression except maybe ensure normalization. This would be useful in order to make compression optional: No arguments cause no compression to be performed.

However, both mpa.compress() and mpa.compression() perform SVD compression even without bdim and relerr, i.e. they truncate singular values which are exactly zero. Exact zeros occur rarely, so in many cases this will not reduce the bond dimension any more than a simple mpa.normalize() would, and CPU time is wasted.

I suggest changing the two methods to not run SVD compression unless method='SVD' is specified explicitly. @dseuss, do you agree?
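The zero-truncation behaviour described here can be sketched in dense NumPy (illustrative names, not mpnum's code): with relerr=0 only exactly-zero singular values are dropped, which for a generic matrix removes nothing.

```python
import numpy as np

def svd_truncate(mat, relerr=0.0):
    """Keep singular values s_i with s_i > relerr * s_0. With relerr=0,
    only exact zeros are dropped -- which rarely shrinks anything."""
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    keep = s > relerr * s[0]
    return u[:, keep], s[keep], vt[keep]

mat = np.random.randn(8, 8)          # generic matrix: full rank
u, s, vt = svd_truncate(mat)         # relerr=0 keeps all 8 values
u2, s2, vt2 = svd_truncate(mat, relerr=1e-1)
```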

Dependencies

Remove sphinx and pytest from the normal dependencies, since they are not necessary to use the library. sphinx in particular takes a long time to install on Travis (but is necessary for mpnum/_docs.py).

Suggestion: Move the HTMLTranslator somewhere where it makes more sense or remove it and scale the svg pics appropriately.

Unify implementations of sweeping algorithms

Sweeping algorithms:

  1. SVD compression
  2. Variational compression
  3. Minimal eigenvalue computation

Numbers 2 and 3 may involve operations on supersites which need to be split after being modified (in the same way in both cases).

Possible goal for unified implementation: Make adaptive versions of the new implementation easy to implement. (E.g. using the compression error when splitting supersites, or other values.)

Normal form vs Normalized

I am slowly getting angry with my past self for calling the canonical form the "normal form" (and, hence, introducing functions such as MPArray.normalize(...)). This will be a big change to the API, but I think we should think about a new naming scheme.

And while we are making ourselves unhappy, I would also suggest renaming bdims to ranks and pdims to dims. I think the distinction would help readability a little.

mp.linalg.mineig?

What is mineig actually computing? If it really is the minimal eigenvalue, the following example should give 0

In [1]: import mpnum as mp
In [2]: a = mp.mps_to_mpo(mp.random_mps(6, 4, 1))
In [3]: mp.linalg.mineig(a, eigs_opts={'maxiter': 5000})
Out[3]:
((0.99999999999999956-4.9034074076101329e-17j),
  <mpnum.mparray.MPArray at 0x1067ceba8>)
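For reference, a dense NumPy analogue of this example (downscaled to local dimension 2, i.e. a 64-dimensional state instead of 4**6) shows what the minimal eigenvalue should be. Assuming the random MPS is normalized, mps_to_mpo builds the projector |psi><psi|, whose spectrum is {1, 0, ..., 0}:

```python
import numpy as np

psi = np.random.randn(2**6)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi)          # rank-1 projector |psi><psi|

evals = np.linalg.eigvalsh(rho)   # ascending order
min_eig, max_eig = evals[0], evals[-1]
# min_eig is 0 (up to rounding), max_eig is 1 -- so the value ~1
# returned above looks like the maximal eigenvalue, not the minimal one.
```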

Numerical accuracy

Compression tests fail randomly for different instances.

  • Is assert_mpa_identical too strict?
  • We should check seeding since this is not reproducible.

Tutorial

A tutorial would be a great contribution for beginners, especially a video tutorial to guide the explanation of the code.

Index convention for MPOs

Hello,
I'm trying to utilize mpnum's efficient implementation of the "sandwich" function by importing specifically defined MPSs and MPOs into the MPArray data structure. However, the documentation never mentions the index convention for the sandwich function. Specifically, the two MPSs will have sites with shape (Dvirtual, Dphys, Dvirtual), while the MPO will have (Dvirtual, Dphys, Dphys, Dvirtual). For the construction <mps1 | mpo | mps2> , which of these physical legs on the MPO points toward mps2 and which points toward mps1? I believe this is important because just flipping the order of the physical legs on each site does not even properly correspond to taking the transpose of the MPO (we also need to reverse the MPO), so this index order matters even for operators that are only real valued. Obviously, if the operator has complex values this is even more problematic.

Thanks
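The contraction in question can be written out in dense NumPy. Which MPO physical leg contracts with the bra (mps1) and which with the ket (mps2) is exactly the convention this issue asks about; the (left, bra, ket, right) ordering below is only an assumption chosen for illustration, not a statement about mpnum's actual convention:

```python
import numpy as np

def sandwich(mps1, mpo, mps2):
    """<mps1 | mpo | mps2>, contracted site by site."""
    env = np.ones((1, 1, 1))          # (bra bond, mpo bond, ket bond)
    for a, w, b in zip(mps1, mpo, mps2):
        # a: (left, phys, right); w: (left, bra phys, ket phys, right)
        env = np.einsum('xyz,xiu,yijv,zjw->uvw', env, a.conj(), w, b)
    return env.reshape(())

d, D = 2, 3                           # physical and bond dimension
mps1 = [np.random.randn(1, d, D), np.random.randn(D, d, 1)]
mps2 = [np.random.randn(1, d, D), np.random.randn(D, d, 1)]
mpo = [np.random.randn(1, d, d, D), np.random.randn(D, d, d, 1)]
val = sandwich(mps1, mpo, mps2)
```

Swapping the two physical MPO legs in the einsum corresponds to applying the transposed operator, which is why the convention matters even for real-valued MPOs.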

inner() is inconsistent with numpy

numpy behaviour:

>>> import numpy as np
>>> u = np.array([1j, 0])
>>> v = np.array([1, 0])
>>> np.inner(u, v)
1j

numpy's inner() does not complex-conjugate the first argument. mpnum's inner() does:

>>> import mpnum as mp
>>> mpu = mp.MPArray.from_array(u, plegs=1)
>>> mpv = mp.MPArray.from_array(v, plegs=1)
>>> mp.inner(mpu, mpv)
array(-1j)

This is not consistent with numpy. Is this desirable?

In numpy, there is also vdot(), which complex-conjugates its first argument:

>>> np.dot(u, v)
1j
>>> np.vdot(u, v)
-1j
>>> mp.dot(mpu, mpv).to_array()
array(1j)

Custom Exceptions

Currently, we only work with assertions/standard exceptions. For better error tracking, it would be better to define mpnum specific exceptions.

Compression return values

What should we return from compression procedures? Make sure we don't waste too much time computing information we don't always use!

Custom svd function

Hi everyone,
I've just noticed that in mparray.py, specifically in compress_svd_l and compress_svd_r, the relerr path uses svd instead of svdfunc, which makes it impossible to use an SVD function other than numpy.linalg.svd.
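The suggested fix amounts to threading a pluggable `svdfunc` through the relerr truncation path instead of hard-coding numpy.linalg.svd. A dense sketch (the function name and signature are illustrative, not mpnum's API):

```python
import numpy as np

def compress_svd(mat, relerr, svdfunc=np.linalg.svd):
    """Truncated SVD keeping s_i > relerr * s_0, with a pluggable SVD."""
    u, s, vt = svdfunc(mat, full_matrices=False)
    keep = max(1, int(np.sum(s > relerr * s[0])))
    return u[:, :keep] @ np.diag(s[:keep]) @ vt[:keep]

mat = np.random.randn(6, 6)
default = compress_svd(mat, relerr=1e-12)

# any callable with the same interface can now be swapped in
# (here it just wraps numpy again, purely to demonstrate the hook):
custom_svd = lambda m, full_matrices=False: np.linalg.svd(
    m, full_matrices=full_matrices)
custom = compress_svd(mat, relerr=1e-12, svdfunc=custom_svd)
```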


Unwanted canonical form in tests

I just noticed that many tests still use something like

mps = random_mpa(...)
mps /= mp.norm(mps)

This is a bad idea since the call to norm brings the mps into full left-canonical form, which might be unintended. A somewhat better call would be

mps /= mp.norm(mps.copy())

However, the correct thing to do is to use the normalized=True argument provided by most factory functions.

Default dtype

Is there any reason not to have a real dtype as default value for all the generators? I always get confused since this would be numpy's standard behavior.

Variational vs SVD compression

In 656fa29, we moved the comparison between the two methods into a separate method. Since the quality of variational compression strongly depends on the quality of the initial guess, I currently don't see a sensible test. Therefore, the test is skipped for now.

mpnum for windows

I am trying to use mpnum on Windows and would appreciate any help installing it there.

Fix mpnum.special.sumup

The current implementation of mpnum.special.sumup is probably not the best. [1, Eq. (153)] suggests a more efficient algorithm for compression after summation. We should try it out.

[1] U. Schollwöck, “The density-matrix renormalization group in the age of matrix product states,” Annals of Physics, vol. 326, no. 1, pp. 96–192, Jan. 2011.
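The cost of naive summation is easy to see in dense NumPy: adding two tensor trains embeds their cores block-diagonally, so ranks add up, and summing k terms before a single final compression means working with cores of rank k times larger. This sketch shows one bulk site of the embedding (illustrative code, not mpnum's implementation):

```python
import numpy as np

def add_cores(c1, c2, first=False, last=False):
    """Block-diagonal embedding of one site of tt1 + tt2.
    Cores have shape (left_rank, phys_dim, right_rank)."""
    l1, d, r1 = c1.shape
    l2, _, r2 = c2.shape
    if first:                       # boundary: stack along the right bond
        return np.concatenate([c1, c2], axis=2)
    if last:                        # boundary: stack along the left bond
        return np.concatenate([c1, c2], axis=0)
    out = np.zeros((l1 + l2, d, r1 + r2))
    out[:l1, :, :r1] = c1           # tt1's core in the top-left block
    out[l1:, :, r1:] = c2           # tt2's core in the bottom-right block
    return out

c1 = np.random.randn(3, 2, 3)
c2 = np.random.randn(4, 2, 4)
summed = add_cores(c1, c2)          # ranks add: 3 + 4 -> 7
```

Interleaving summation and compression, as in [1, Eq. (153)], keeps the intermediate ranks bounded instead.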

Floating point overflow

At the moment, the result of a / mp.norm(a) may unexpectedly be zero or NaN for large MPArrays (or small MPArrays with large tensor entries):

>>> import numpy as np
>>> import mpnum as mp
>>>
>>> a = mp.MPArray.from_kron([np.array([10.0])] * 308)
>>> mp.norm(a / mp.norm(a))
0.0
>>> b = mp.MPArray.from_kron([np.array([10.0])] * 309)
>>> mp.norm(b / mp.norm(b))
nan

The reason is that floats cannot exceed a certain value:

>>> np.finfo(float).max
1.7976931348623157e+308
>>> a.lt[-1], mp.norm(a)
(array([[[  1.00000000e+308]]]), inf)
>>> b.lt[-1], mp.norm(b)
(array([[[ inf]]]), inf)
>>> 

It would be nice to add a method which computes a / mp.norm(a) without running into the floating point overflow.

Underflow can also happen:

>>> for n in 161, 162, 323, 324:
...     a = mp.MPArray.from_kron([np.array([0.1])] * n)
...     print('{}   {!r:25} {!r:25} {!r:20}'.format(
...         n, mp.norm(a), a.lt[-1].flat[0], mp.norm(a / mp.norm(a))))
... 
161   9.9404793228621183e-162   1.0000000000000097e-161   1.0059877069510206  
162   0.0                       1.0000000000000097e-162   inf                 
323   0.0                       9.8813129168249309e-324   inf                 
324   0.0                       0.0                       nan                 
>>> 
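One possible shape for such a method, sketched in plain NumPy for the from_kron case above: rescale each factor by its own norm and accumulate the total norm in log space, so the product of per-site norms never has to be represented as a single float.

```python
import math
import numpy as np

factors = [np.array([10.0])] * 309      # naive norm would overflow to inf

log_norm = 0.0
normalized = []
for f in factors:
    n = np.linalg.norm(f)
    log_norm += math.log(n)             # accumulate in log space
    normalized.append(f / n)            # each factor now has norm 1

# the normalized factors have product-norm exactly 1 ...
unit_norm = np.prod([np.linalg.norm(f) for f in normalized])
# ... while the original norm is still recoverable from log_norm;
# here log10(norm) = 309, far beyond the float range shown above.
log10_norm = log_norm / math.log(10)
```

The same trick handles the underflow cases, since small per-site norms only make log_norm negative instead of rounding the product to zero.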
