
bodigrim / poly

Fast polynomial arithmetic in Haskell (dense and sparse, univariate and multivariate, usual and Laurent)

Home Page: http://hackage.haskell.org/package/poly

License: BSD 3-Clause "New" or "Revised" License

Haskell 100.00%
sparse-univariate-polynomials dense-univariate-polynomials sparse-multivariate-polynomials polynomials polynomial-arithmetic polynomial-multiplication polynomial-division

poly's People

Contributors

bodigrim, dependabot[bot], konsumlamm


poly's Issues

test/Dense.hs:251:54: error: Not in scope: ‘S.integral’

For Stackage LTS 14:

    Building test suite 'poly-tests' for poly-0.3.2.0..
    [2 of 4] Compiling Dense
    
    /var/stackage/work/unpack-dir/unpacked/poly-0.3.2.0-0a08b17044344cfa9923aefddad623987d12ed2d25738f5acb46f2ec71f8610d/test/Dense.hs:251:54: error:
        Not in scope: ‘S.integral’
        Module ‘Data.Poly.Semiring’ does not export ‘integral’.
        |
    251 |     \(p :: Poly V.Vector Rational) -> integral p === S.integral p
        |                                                      ^^^^^^^^^^

Substitute polynomial instead of X

eval :: (Num a, G.Vector v a) => Poly v a -> a -> a
eval = substitute (*)
{-# INLINE eval #-}

subst :: (Eq a, Num a, G.Vector v a, G.Vector w a) => Poly v a -> Poly w a -> Poly w a
subst = substitute (scale 0)
{-# INLINE subst #-}

-- fst' and (:*:) are the library's internal strict pair, keeping both the
-- running sum and the current power of x strict
substitute :: (G.Vector v a, Num b) => (a -> b -> b) -> Poly v a -> b -> b
substitute f (Poly cs) x = fst' $
  G.foldl' (\(acc :*: xn) cn -> acc + f cn xn :*: x * xn) (0 :*: 1) cs
{-# INLINE substitute #-}

Note that substitute is suspiciously similar to foldr modulo constraints. There is also pure = monomial 0 and one can write join :: Poly v (Poly v a) -> Poly v a.
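
A rough sketch of that observation against the public Data.Poly API (monomial and eval exist there; the names purePoly and joinPoly are made up for illustration):

import Data.Poly (Poly, eval, monomial)
import qualified Data.Vector.Generic as G

-- pure embeds a constant as a degree-0 polynomial
purePoly :: (Eq a, Num a, G.Vector v a) => a -> Poly v a
purePoly = monomial 0

-- join evaluates the outer polynomial at X, relying on the Num instance of Poly v a
joinPoly :: (Eq a, Num a, G.Vector v a, G.Vector v (Poly v a)) => Poly v (Poly v a) -> Poly v a
joinPoly pp = eval pp (monomial 1 1)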

Test more laws

As per the new release of quickcheck-classes-0.6.3.0, test the following law sets (a wiring sketch follows the list):

  • numLaws,
  • isListLaws,
  • gcdDomainLaws,
  • euclideanLaws,
  • showLaws.
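
A rough wiring sketch with lawsCheckMany from quickcheck-classes. The concrete coefficient type, the orphan Arbitrary instance and the function name are assumptions made purely for illustration; the real test suite has its own generators.

{-# LANGUAGE FlexibleInstances #-}
{-# OPTIONS_GHC -Wno-orphans #-}

import Data.Poly (VPoly, toPoly)
import Data.Proxy (Proxy (..))
import qualified Data.Vector as V
import Test.QuickCheck (Arbitrary (..))
import Test.QuickCheck.Classes

-- orphan instance for this sketch only
instance Arbitrary (VPoly Rational) where
  arbitrary = toPoly . V.fromList <$> arbitrary

checkPolyLaws :: IO ()
checkPolyLaws = lawsCheckMany
  [ ( "VPoly Rational"
    , [ numLaws p
      , isListLaws p
      , showLaws p
      -- gcdDomainLaws p and euclideanLaws p would go here as well, given
      -- GcdDomain / Euclidean instances for the chosen coefficient type
      ]
    )
  ]
  where
    p = Proxy :: Proxy (VPoly Rational)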

Reuse Data.Euclidean.gcdExt

I would like to reuse gcdExt from semirings somehow. There are two possible avenues:

  1. Decommission Data.Poly.gcdExt and Data.Poly.Sparse.gcdExt entirely. This seems to be the cleanest solution in the long run, but it has a drawback: Data.Euclidean.gcdExt does not normalize polynomials the way scaleMonic does. A client could normalize the output on their side (if they actually care about having it normalized).

  2. Reuse Data.Euclidean.gcdExt in Data.Poly.gcdExt and Data.Poly.Sparse.gcdExt, but still apply scaleMonic and leave both entities exported. This is backwards compatible, but somewhat annoying: anyone importing Data.Euclidean and Data.Poly together would need to hide one of the two gcdExts, leaving a large chance of using the wrong one (because on many inputs they return the same values).

At the moment I am more inclined to follow the first option.

@Multramate what do you think about it? Would it be difficult to normalize polynomials in galois-field?
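
For what it's worth, the missing normalization in option 1 could also be done on the client's side. A sketch in terms of poly's public leading and scale, with a Fractional constraint assumed purely for illustration:

import Data.Poly (VPoly, leading, scale)

-- rescale the pair returned by Data.Euclidean.gcdExt so the gcd becomes monic
monicizeExt :: (Eq a, Fractional a) => (VPoly a, VPoly a) -> (VPoly a, VPoly a)
monicizeExt (g, s) = case leading g of
  Nothing     -> (g, s)
  Just (_, c) -> (scale 0 (recip c) g, scale 0 (recip c) s)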

Euclidean methods are unergonomic with Fractional

Perhaps I'm doing things wrong, but in order to implement quotRem for my newtype over VPoly a, I had to do the following

{-# LANGUAGE ScopedTypeVariables #-}

import Data.Coerce (Coercible, coerce)
import qualified Data.Euclidean as Euclidean
import qualified Data.Poly as Poly

-- my own newtype over VPoly a
newtype Polynomial a = Polynomial (Poly.VPoly a)

coercePoly :: (Eq b, Num b) => Coercible a b => Poly.VPoly a -> Poly.VPoly b
coercePoly = Poly.toPoly . coerce . Poly.unPoly

quotRem :: forall a . (Eq a, Fractional a) => Polynomial a -> Polynomial a -> (Polynomial a, Polynomial a)
quotRem (Polynomial p1) (Polynomial p2) = (Polynomial (coercePoly q), Polynomial (coercePoly r))
  where
    (q, r) = Euclidean.quotRem p1' p2'
    p1' :: Poly.VPoly (Euclidean.WrappedFractional a) = coercePoly p1
    p2' :: Poly.VPoly (Euclidean.WrappedFractional a) = coercePoly p2

There are two issues: firstly, the need to use WrappedFractional is a little cumbersome. It would be a bit easier if something like coercePoly were included, or better if there were an instance Fractional a => Euclidean (VPoly a). The other issue is that coercePoly requires using toPoly, which includes extra constraints and traverses the coefficients. It would be nice if there were an unsafe way to map over the coefficients, which could be used when the user knows the mapping function will never take nonzero values to zero values.

Add an inverse of scale

Implement in the internal Semiring-based modules (a rough public-API sketch follows the specification below)

unscale :: (Eq a, Euclidean a, G.Vector v a) => Word -> a -> Poly v a -> Poly v a

such that

unscale a b . scale a b = id 
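
A rough sketch of the dense case in terms of the public Semiring-based API; a real implementation would live in the internal modules and avoid the re-normalization performed by toPoly:

import qualified Data.Euclidean as E
import Data.Poly.Semiring (Poly, toPoly, unPoly)
import qualified Data.Vector.Generic as G

-- drop the first n coefficients and divide the rest by c,
-- undoing multiplication by c * X^n
unscale :: (Eq a, E.Euclidean a, G.Vector v a) => Word -> a -> Poly v a -> Poly v a
unscale n c = toPoly . G.map (`E.quot` c) . G.drop (fromIntegral n) . unPoly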

Fix tests against QuickCheck-2.14.3

All
  Orthogonal
    differential equations
      jacobi:                                           FAIL (0.01s)
        *** Failed! Exception: 'Ratio has zero denominator' (after 8 tests and 1 shrink):
        (-16) % 3
        4 % 3
        Exception thrown while showing test case: 'Ratio has zero denominator'
        Use --quickcheck-replay=431599 to reproduce.
        Use -p '/differential equations.jacobi/' to rerun this test only.
      gegenbauer:                                       FAIL
        *** Failed! Exception: 'Ratio has zero denominator' (after 2 tests):
        (-1) % 1
        Exception thrown while showing test case: 'Ratio has zero denominator'
        Use --quickcheck-replay=143703 to reproduce.
        Use -p '/differential equations.gegenbauer/' to rerun this test only.
    normalization
      jacobi:                                           FAIL
        *** Failed! Exception: 'Ratio has zero denominator' (after 4 tests):
        (-2) % 1
        0 % 1
        Exception thrown while showing test case: 'Ratio has zero denominator'
        Use --quickcheck-replay=550024 to reproduce.
        Use -p '/normalization.jacobi/' to rerun this test only.
      gegenbauer:                                       FAIL
        *** Failed! Exception: 'Ratio has zero denominator' (after 2 tests):
        (-1) % 1
        Exception thrown while showing test case: 'Ratio has zero denominator'
        Use --quickcheck-replay=16628 to reproduce.
        Use -p '/normalization.gegenbauer/' to rerun this test only.
  MultiLaurent
    eval
      eval (p + q) r = eval p r + eval q r:             FAIL
        *** Failed! Exception: 'Ratio has zero denominator' (after 6 tests and 16 shrinks):
        ShortPoly {unShortPoly = 0}
        ShortPoly {unShortPoly = 1 % 1 * X^-1}
        Vector [0 % 1,0 % 1,0 % 1]
        Exception thrown while showing test case: 'Ratio has zero denominator'
        Use --quickcheck-replay=671565 to reproduce.
        Use -p '/MultiLaurent.eval.eval (p + q) r = eval p r + eval q r/' to rerun this test only.

5 out of 1164 tests failed (19.68s)
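
The failing inputs (e.g. alpha = -16/3 for Jacobi, alpha = -1 for Gegenbauer) fall outside the classical parameter ranges, where the recurrence denominators vanish. One possible direction is to constrain the generated parameters on the test side; a sketch, with the bound and name chosen for illustration only:

import Test.QuickCheck (Gen, arbitrary, suchThat)

-- generate only parameters in the classical orthogonality range,
-- so the recurrences never divide by zero
genJacobiParam :: Gen Rational
genJacobiParam = arbitrary `suchThat` (> (-1))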

Extended GCD of polynomials

  • Extended GCD of dense polynomials.
  • Extended GCD of sparse polynomials.
  • Benchmark extended GCD on binary polynomials.
  • (won't do) Efficient implementation of extended GCD for dense polynomials, using mutable vectors.

Folding or evaluating polynomials

foldP  :: Num a      => Poly a -> a -> a
foldP' :: Semiring a => Poly a -> a -> a
-- pseudo code
foldP f(x) v = f(v)
foldP (a + bx + cx^2 + ...) v = a + b v + c v^2 + ...
-- if there is some other type of polynomial we can fold into it, where v above would be v(X) = X
-- I think this can be used to compose polynomials also

(Strict variants?) It should be possible to have x^n as part of the accumulator so at each step it's only one addition and one multiplication.
I thought about foldMapP :: Num r => (a -> r) -> Poly a -> r -> r, but it won't necessarily preserve the meaning of the polynomial: for example, the law foldP p a + foldP q a == foldP (p + q) a need not hold for its analogue (and similarly for the other ring operations, I believe). I'm not too well versed here, but I think the input function would need to be a ring homomorphism.
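
A minimal sketch of foldP over a plain coefficient list (lowest degree first), carrying v^n in the accumulator as described above; foldP' would look the same with Semiring's plus and times:

import Data.List (foldl')

foldP :: Num a => [a] -> a -> a
foldP cs v = fst (foldl' step (0, 1) cs)
  where
    -- one addition and one coefficient multiplication per step, plus updating v^n
    step (acc, vn) c = (acc + c * vn, vn * v)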

Add a definition for the identity polynomial

Add a definition for the identity polynomial.

var :: Num a => Poly a
var = Poly [0, 1]

Allows writing e.g. 3*var^2 + 5*var + 2. If other univariate polynomial types are added (see the other issues), this could become a class. It could also be used to fold polynomial types (I'll open another issue for this; evaluation is one example).
It might also be possible to use a pattern synonym to get a concise name that is unlikely to conflict with other variables. This adds an extra Eq constraint, but can also be used for pattern matching:

pattern X = Poly [0, 1]
-- 3*X^2 + 5*X + 2 
-- :: (Eq a, Num a) => Poly a
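
For reference, the proposed var can be expressed through poly's existing monomial (a sketch; only the name var is new here):

import Data.Poly (Poly, monomial)
import qualified Data.Vector.Generic as G

-- the identity polynomial X, i.e. coefficient 1 at degree 1
var :: (Eq a, Num a, G.Vector v a) => Poly v a
var = monomial 1 1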

Multivariate polynomials

One can model multivariate polynomials in poly as several layers of univariate polynomials. E.g., polynomials over [X, Y] can be represented as polynomials over X whose coefficients are themselves polynomials over Y. However, such a representation is very awkward to work with.

Let's implement sparse multivariate polynomials with some basic arithmetic (provided by means of Semiring and Ring instances). I propose to represent them as

module Poly.Sparse.Multi where 

import qualified Data.Vector.Sized.Unboxed as SU

newtype MultiPoly (n :: Nat) (v :: Type -> Type) (a :: Type) = MultiPoly
  { unMultiPoly :: v (SU.Vector n Word, a)
  }

Here n stands for the number of variables, v is the flavour of the underlying vector type, and a is the coefficient type.
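
To illustrate the intended arithmetic independently of the vector flavour, here is a sketch of multiplication over plain term lists, where each term is an exponent vector paired with a coefficient (the names are made up for this illustration):

type Term a = ([Word], a)

-- exponent vectors add componentwise, coefficients multiply
multiplySparse :: Num a => [Term a] -> [Term a] -> [Term a]
multiplySparse xs ys =
  [ (zipWith (+) e1 e2, c1 * c2) | (e1, c1) <- xs, (e2, c2) <- ys ]
-- a real implementation would then sort by exponent vector, merge equal
-- exponents and drop zero coefficients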

Add integral to Semiring-based APIs

Implement a Field-based version of integral and expose it from Data.Poly.Semiring and Data.Poly.Sparse.Semiring (a rough dense sketch follows the signatures below).

integral :: (Eq a, Field a, G.Vector v a) => Poly v a -> Poly v a
integral :: (Eq a, Field a, G.Vector v (Word, a)) => Poly v a -> Poly v a
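
A rough sketch of the dense version in terms of the public API; quot is Data.Euclidean's division, and toPoly / unPoly stand in for the internal constructor:

import qualified Data.Euclidean as E
import Data.Poly.Semiring (Poly, toPoly, unPoly)
import Data.Semiring (fromNatural, zero)
import qualified Data.Vector.Generic as G

integral :: (Eq a, E.Field a, G.Vector v a) => Poly v a -> Poly v a
integral = toPoly . G.cons zero . G.imap divide . unPoly
  where
    -- the coefficient of x^i becomes the coefficient of x^(i+1), divided by i+1
    divide i c = c `E.quot` fromNatural (fromIntegral (i + 1))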

definition of `degree` in Data.Poly.Internal.Dense.Field is nonstandard

The module Data.Poly.Internal.Dense.Field defines degree as

  degree (Poly xs) = fromIntegral (G.length xs)

For example, degree X = 2 and degree 1 = 1. This is not the standard definition of polynomial degree, which has degree_standard X = 1 and degree_standard 1 = 0, so it could cause confusion. Is there a reason it is implemented this way?

More flexible access to underlying vector

Following the discussion with @DaveBarton

  1. Provide some unsafe wrappers / unwrappers.
  2. Provide helpers for common operations: snoc, unsnoc, monicize (multiply by the inverse of the leading coefficient, which is required to be a unit), and shift (like scale, but with coefficient 1; see the sketch below).
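
For instance, shift can already be expressed through existing primitives; a sketch:

import Data.Poly (Poly, scale)
import qualified Data.Vector.Generic as G

-- multiply by X^n, i.e. scale with coefficient 1
shift :: (Eq a, Num a, G.Vector v a) => Word -> Poly v a -> Poly v a
shift n = scale n 1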
