SoftPosit.jl


A pure Julia implementation of posit arithmetic. Posit numbers are an alternative to floating-point numbers. Posits extend floats by introducing regime bits, which allow for higher precision around one while retaining a wide dynamic range of representable numbers. For further information see posithub.org.

v0.5 respects the 2022 Standard for Posit Arithmetic but drops quire support. v0.4 implements the previous draft standard and has quire support, but depends on the C implementation of SoftPosit.

If this library is missing functionality you need, or for anything else, please raise an issue.

Installation

In the Julia REPL do

julia>] add SoftPosit

where ] opens the package manager. Then simply using SoftPosit loads all of the functionality.

Posit formats

SoftPosit.jl emulates and exports the following Posit number formats

Posit8, Posit16, Posit32
Posit16_1

Posit8, Posit16, Posit32 are the standard formats with 2 exponent bits. The non-standard format Posit(16,1) (16 bits with 1 exponent bit, exported as Posit16_1) was part of the previous posit arithmetic draft standard.

For all formats, conversions to and from integers and floats as well as the basic arithmetic operations +, -, *, / and sqrt (among others) are defined.

Examples

Conversion to and from Float64 and computing a square root:

julia> using SoftPosit
julia> p = Posit16(16)
Posit16(16.0)

julia> sqrt(p)
Posit16(4.0)

The bitwise representation, split into sign, regime, exponent and mantissa bits, can be obtained with bitstring(p,:split):

julia> bitstring(Posit32(123456.7),:split)
"0 111110 00 11100010010000001011010"

Or solving a linear system of equations with Posit8:

julia> A = Posit8.(randn(3,3))
3×3 Matrix{Posit8}:
 Posit8(1.125)      Posit8(-0.5625) Posit8(0.0390625)
 Posit8(-1.5)       Posit8(0.0625)  Posit8(1.25)
 Posit8(-0.40625)   Posit8(1.875)   Posit8(1.125)

julia> b = Posit8.(randn(3))
3-element Vector{Posit8}:
 Posit8(1.25)
 Posit8(-1.375)
 Posit8(-0.6875)

julia> A\b
3-element Vector{Posit8}:
 Posit8(1.0)
 Posit8(-0.21875)
 Posit8(0.125)

For an (outdated) comprehensive notebook covering (almost) all the functionality of SoftPosit.jl, please see softposit_examples.ipynb.

Rounding mode

Following the 2022 posit standard, posits should never underflow nor overflow. v0.5 generally respects this, but with a caveat: posits currently do underflow below about 4*floatmin of the float format you are converting from. In practice this is of little importance, as even floatmin(::PositN)^2 is larger than that:

julia> floatmin(Posit16)
Posit16(1.3877788e-17)

julia> floatmin(Posit16)*floatmin(Posit16)
Posit16(1.3877788e-17)

and similarly for other posit formats. So in Posit16 arithmetic we have 1e-17*1e-17 = 1e-17 (no underflow) and 1e17*1e17 = 1e17 (no overflow).

Citation

If you use this package please cite us

Klöwer M, PD Düben and TN Palmer, 2020. Number formats, error mitigation and scope for 16-bit arithmetics in weather and climate modelling analyzed with a shallow water model, Journal of Advances in Modeling Earth Systems, 12, e2020MS002246. 10.1029/2020MS002246

SoftPosit.jl's Issues

Benchmarking

This is to summarize the performance of SoftPosit.jl, measured via conversion between Posit16 and Float32:

                v0.3 (SoftPosit-C)   v0.4 (SoftPosit.jl)   v0.5*
P16->F32        32ns                 0.76ns               0.65ns
F32->P16        100ns                1.1ns                0.865ns
P16->F32->P16   120ns                1.9ns                1.4ns

*upcoming release, which will include #68, the new 2022 posit standard and type-flexible conversions such that all PositN(::FloatN) conversions use a single function (via multiple dispatch). Tested with:

julia> using SoftPosit, BenchmarkTools
julia> function f!(::Type{TB},A::Array{TA}) where {TB,TA}
           @inbounds for i in eachindex(A)
               A[i] = TA(TB(A[i]))
           end
       end
julia> function f!(B::Array{TB},A::Array{TA}) where {TB,TA}
           @inbounds for i in eachindex(A,B)
               B[i] = TB(A[i])
           end
       end

julia> A = Posit16.(rand(UInt16,1000000));
julia> B = rand(Float32,1000000);
julia> @btime f!($B,$A);
julia> @btime f!($A,$B);
julia> @btime f!($Float32,$A);
julia> @btime f!($Posit16,$B);

Arithmetic operations: power

So far, things like Posit16(1.23)^2 are not supported. The best way would probably be to include promotion of integers to posits. The SoftPosit library does not have a power function, but we might be able to implement one as a sequence of multiplications/divisions. Priority on low powers such as squaring.
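
In the meantime, integer powers can be built from repeated multiplication via binary exponentiation; posit_pow below is a hypothetical helper (not part of SoftPosit.jl) that only needs *, one and inv to be defined for the number type:

```julia
# Hypothetical helper: integer powers via binary exponentiation.
# Works for any number type that defines *, one and inv.
function posit_pow(x, n::Integer)
    n == 0 && return one(x)
    n < 0 && return inv(posit_pow(x, -n))
    y = one(x)
    while n > 1
        if isodd(n)
            y *= x      # fold the current square into the result
        end
        x *= x          # square
        n >>= 1
    end
    return x * y
end
```

With SoftPosit loaded, posit_pow(Posit16(1.23), 2) would then stand in for Posit16(1.23)^2; note that each intermediate product is rounded, so the result can differ slightly from a correctly rounded power.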

MethodError: no method matching Posit16(::BigFloat)

Hello Milan,

I am learning to program in Julia and trying to use the package DifferentialEquations.jl (see https://github.com/SciML/DifferentialEquations.jl) in conjunction with SoftPosit.jl.

According to the author of DifferentialEquations.jl, you can use any custom type as long as some basic operators are defined. However, I encounter the error message MethodError: no method matching Posit16(::BigFloat) when trying to solve a simple ODE problem using posits (as initial condition and time array). In fact, the same error occurs for me when defining a variable like big_float = convert(BigFloat, 1.0) and then constructing a 16-bit Posit with Posit16(big_float). This suggests it has to do with SoftPosit.jl. The very same error message is thrown when using the alternative SigmoidNumbers.jl package.

Do you have an idea what the issue might be here?

Many thanks in advance and kind regards,

Alexandros
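
A possible user-side workaround (a sketch only, assuming a detour through Float64 is acceptable; the double rounding BigFloat -> Float64 -> Posit16 can differ from a correctly rounded direct conversion in rare cases):

```julia
using SoftPosit  # assumed installed

# Hypothetical workaround: route BigFloat through Float64, since
# SoftPosit.jl defines conversions from the standard float types.
SoftPosit.Posit16(x::BigFloat) = Posit16(Float64(x))
```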

FFTs with posits

Base.maxintfloat has to be defined for the posit types.

julia> using SoftPosit, FastTransforms

julia> x = Posit16.(rand(8))
8-element Vector{Posit16}:
 Posit16(0.6254883)
 Posit16(0.7780762)
 Posit16(0.8145752)
 Posit16(0.50598145)
 Posit16(0.74768066)
 Posit16(0.29797363)
 Posit16(0.89624023)
 Posit16(0.6437988)

julia> fft(x)
ERROR: MethodError: no method matching maxintfloat(::Type{Posit16})
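
A brute-force sketch of the missing method, mirroring Base's meaning (the largest n such that 0..n are all exactly representable). posit_maxintfloat is a hypothetical helper written generically so it does not assume SoftPosit internals; the upward scan is fine for 8- and 16-bit formats but far too slow for Posit32, where a closed-form bound should be derived instead:

```julia
# Hypothetical helper: find the largest n with 0..n all exactly
# representable in T, by scanning upward until n+1 rounds away.
function posit_maxintfloat(::Type{T}) where T
    n = 1.0
    while Float64(T(n + 1)) == n + 1
        n += 1
    end
    return T(n)
end
```

Base.maxintfloat(::Type{Posit16}) = posit_maxintfloat(Posit16) would then unblock fft, and the same pattern applies to the other formats.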

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Conversion of number to Posit8 returns NaR

Hello,
I found unexpected results when converting a normal floating-point number to a Posit8. I believe a large number should return the largest posit (maxpos) instead of NaR:

julia> using SoftPosit

julia> Posit8(1e9)
NaR

Have a nice day
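
For reference, the 2022 standard's saturating behaviour can be sketched as a clamp before conversion. saturating_posit8 is a hypothetical helper; it assumes floatmax(Posit8) returns the largest posit and that SoftPosit.notareal is the internal NaR constructor used elsewhere in this repository:

```julia
using SoftPosit  # assumed installed

# Hypothetical helper: conversion that saturates finite out-of-range
# values at ±maxpos; only non-finite inputs (NaN, ±Inf) map to NaR.
function saturating_posit8(x::Float64)
    isfinite(x) || return SoftPosit.notareal(Posit8)  # assumed internal helper
    m = Float64(floatmax(Posit8))
    return Posit8(clamp(x, -m, m))
end
```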

Posit32(true) fails

Using Matrix to make a dense matrix out of the Q output of the qr function requires Posit32(true) to work:

julia> using SoftPosit, LinearAlgebra
julia> Qt,R = qr(Posit32.(randn(3,3)));
julia> Q = Matrix(Qt)

You're welcome to ignore this.
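
A one-line workaround sketch, mirroring how Base number types treat Bool (true -> one, false -> zero); whether to add this to the package itself is for the maintainers to decide:

```julia
using SoftPosit  # assumed installed

# Hypothetical workaround: a Bool constructor, as Base defines for
# the float types (true -> one, false -> zero).
SoftPosit.Posit32(b::Bool) = b ? one(Posit32) : zero(Posit32)
```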

Arithmetic operations are not performed

Hi again,
I'm sorry, but there is another issue, which I thought to mention in a separate post. After resolving the method errors following your suggestions in the previous issue, the computation ran but wouldn't terminate. There are many potential reasons of course, but I discovered that simple arithmetic operations on Posit16 values don't work on my Mac (M1 processor):

julia> a = Posit16(1.0);
julia> a + a
Posit16(1.0)

The same lines on the Windows machine give

julia> a = Posit16(1.0);
julia> a + a
Posit16(2.0)

Any ideas why this would happen? Recall that I installed the via Main to get SoftPosit.jl working on the Mac...

wrong sign function is called

@giordano Do you know why the sign function in src/constants.jl

function sign(p::Type{T}) where {T <: AbstractPosit}
    if signbit(p)       # negative and infinity case
        if isfinite(p)  # negative
            return minusone(T)
        else            # infinity
            return notareal(T)
        end
    else                # positive and zero case
        if iszero(p)    # zero
            return zero(T)
        else            # positive
            return one(T)
        end
    end
end

is not called for e.g. sign(::Posit8)? Some other method (probably for the supertype AbstractFloat) is called, which for sign(notareal(Posit8)) actually returns Posit8(0xc0), i.e. minus one (which makes sense in a float sense, as the sign bit is set). If, instead, I define sign(::Posit8) directly without parametric types, that method is actually called. I therefore assume that there is simply some conflict in the method preferences, which I currently don't understand. Any ideas?
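
One plausible explanation (an assumption, to be checked against the sources): the annotation p::Type{T} declares a method on the *type object* itself, i.e. it matches sign(Posit8) but never sign(Posit8(1.0)), so posit values fall through to Base's generic fallback. Changing the annotation to p::T would restore the intended dispatch. A minimal toy demonstration of the pitfall (Toy and f are hypothetical stand-ins for Posit8 and sign):

```julia
# Toy demonstration of the suspected dispatch pitfall:
struct Toy <: Real
    x::Float64
end

f(p::Type{T}) where {T <: Real} = :type_method    # like sign(p::Type{T})
f(p::T) where {T <: Real} = :instance_method      # like sign(p::T)

f(Toy)      # the type object hits the Type{T} method
f(Toy(1.0)) # a *value* only matches the instance-annotated method
```

So the parametric definition above is a method for sign(Posit8) the type, not for posit values, which would explain why defining sign(::Posit8) directly works.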

PositX1 conversion

Unfortunately, the PositX1 formats (with 1 exponent bit, internally stored as 32 bits) have poor conversion support. Posit8_1(Posit32(1.23)), for example, causes an error.

Why sizeof(Posit8_1) == 4?

I feel I missed something obvious. Shouldn't sizeof(Posit8_x)==1 for all x?

On the other hand, sizeof(Posit8)==1, as expected. Can you explain it to me, or point to the documentation?

Steps to reproduce:

using Pkg
Pkg.add(PackageSpec(url="https://github.com/milankl/SoftPosit.jl"))
using SoftPosit
sizeof(Posit8_1)
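
This matches the PositX1 note above: the X1 formats are internally stored as 32 bits, and Julia's sizeof reports the size of that in-memory representation, not the 8-bit format width. A self-contained toy illustration (MyPosit8 and MyPosit8_1 are hypothetical stand-ins):

```julia
# sizeof reflects the declared storage width of a primitive type:
primitive type MyPosit8 8 end      # 8-bit storage, like Posit8
primitive type MyPosit8_1 32 end   # 32-bit storage, like Posit8_1

sizeof(MyPosit8)    # 1 byte
sizeof(MyPosit8_1)  # 4 bytes: storage width, not format width
```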

exp, log, sin, cos, tan support

Although the SoftPosit C library does not have trigonometric functions implemented yet, we should consider a back-and-forth conversion as an intermediate workaround:

sin(x::Posit16) = Posit16(sin(Float64(x)))

etc. Converting to Float32 should in theory work too as the dynamic range is slightly wider than for Posit32.
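
The round-trip pattern can be stamped out for a whole list of functions with one @eval loop. Shown here on a hypothetical wrapper type MyPosit so the sketch is self-contained; with SoftPosit loaded one would write Posit16 (and its Float64 conversion) in its place:

```julia
# Toy stand-in for a posit type that converts to/from Float64:
struct MyPosit <: Real
    x::Float64
end
Base.Float64(p::MyPosit) = p.x

# Define each function by converting to Float64, applying the Base
# function, and converting back (one rounding on the way back):
for f in (:exp, :log, :sin, :cos, :tan)
    @eval Base.$f(p::MyPosit) = MyPosit($f(Float64(p)))
end
```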

Comparison

Is this correct behavior?

julia> Posit8(1.f0) > 0
false

julia> Posit8(1.f0) < 0
true

standard 2022 available

There is a new standard document available on posithub, which outdates the draft link given in the README.
It would probably be worthwhile to adapt the software, and to look for an adapted version of the underlying C library.

Support for QuadGK

It'd be cool to be able to use this package with QuadGK, which enables numerical integration with custom Julia types.

However, this currently doesn't work because the numeric type must be an AbstractFloat or a type that can be converted to an AbstractFloat. This initial issue would be solved by #14.

After that, the first issue I've faced is that there is no eps method for posit numbers:

julia> quadgk(x -> x ^ 2, Posit16(0), Posit16(1))
ERROR: MethodError: no method matching eps(::Type{Posit16})
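
A sketch of the missing method, under two assumptions: Posit16 is a primitive type that can be reinterpreted to/from UInt16, and bit patterns just above one are consecutive (which holds for posits). eps is then the gap between one and the next representable posit:

```julia
using SoftPosit  # assumed installed

# Hypothetical definition: eps(Posit16) as the distance from one to
# the next posit above it, found by incrementing the bit pattern.
function Base.eps(::Type{Posit16})
    nextup = reinterpret(Posit16, reinterpret(UInt16, one(Posit16)) + 0x0001)
    return nextup - one(Posit16)
end
```

For standard Posit16 (2 exponent bits) this should give 2^-11, since one is encoded with 11 fraction bits.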

Precompilation warnings

julia> using SoftPosit
[ Info: Precompiling SoftPosit [0775deef-a35f-56d7-82da-cfc52f91364d]
WARNING: Method definition (::Type{Int64})(SoftPosit.AbstractPosit) in module SoftPosit at /Users/milan/.julia/packages/SoftPosit/JY6kx/src/conversions.jl:72 overwritten at /Users/milan/.julia/packages/SoftPosit/JY6kx/src/conversions.jl:73.
  ** incremental compilation may be fatally broken for this module **

WARNING: Method definition (::Type{Int64})(SoftPosit.AbstractPosit) in module SoftPosit at /Users/milan/.julia/packages/SoftPosit/JY6kx/src/conversions.jl:73 overwritten at /Users/milan/.julia/packages/SoftPosit/JY6kx/src/conversions.jl:74.
  ** incremental compilation may be fatally broken for this module **
