patashu / break_eternity.js


A JavaScript numerical library to represent numbers as large as 10^^1e308 and as small as 10^-10^^1e308. Sequel to break_infinity.js, designed for incremental games.

License: MIT License

JavaScript 77.06% TypeScript 22.94%
number numbers biginteger bignumber bignum bignumbers bignums bigdecimal decimal incremental-game

break_eternity.js's People

Contributors

bbugh, dan-simon, dependabot-preview[bot], dependabot[bot], loader3229, mathcookie17, mcpower, naruyoko, patashu, reinhardt-c


break_eternity.js's Issues

Decimal.toString("(e^N)X") returns NaN as the sign.

Expected result:

new Decimal("(e^123)456").sign
1
new Decimal("-(e^123)456").sign
-1
new Decimal("(e^123)456").toString()
"(e^123)456"
new Decimal("-(e^123)456").toString()
"-(e^123)456"

Actual result:

new Decimal("(e^123)456").sign
NaN
new Decimal("-(e^123)456").sign
NaN
new Decimal("(e^123)456").toString()
"(e^123)456"
new Decimal("-(e^123)456").toString()
"(e^123)456"
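Judging from the actual results, the parser likely never consumes a leading minus before matching the layered format. A minimal sketch of sign extraction (the function name and regex are illustrative, not the library's actual parser):

```javascript
// Hypothetical sketch: strip the sign before matching the "(e^N)X" layered
// format, so the regex still matches and the sign survives into the result.
function parseLayered(str) {
  let sign = 1;
  if (str.startsWith("-")) {
    sign = -1;
    str = str.slice(1); // consume the minus so the pattern below still matches
  }
  const m = /^\(e\^(\d+)\)(.+)$/.exec(str);
  if (m === null) return null;
  return { sign, layer: Number(m[1]), mag: Number(m[2]) };
}

console.log(parseLayered("-(e^123)456")); // { sign: -1, layer: 123, mag: 456 }
```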

ssqrt returns 1 for too large numbers (layer ~ Number.MAX_SAFE_INTEGER).

Expected result:

new Decimal.fromComponents(1,1000000000000000,10).ssqrt().toString()
"(e^999999999999998)10000000000"
new Decimal.fromComponents(1,10000000000000000,10).ssqrt().toString()
"(e^9999999999999998)10000000000"

Actual result:

new Decimal.fromComponents(1,1000000000000000,10).ssqrt().toString()
"(e^999999999999998)10000000000"
new Decimal.fromComponents(1,10000000000000000,10).ssqrt().toString()
"1"

The exact problem threshold occurs when the layer is around Number.MAX_SAFE_INTEGER.
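That threshold is consistent with double-precision integer limits: past Number.MAX_SAFE_INTEGER (2^53 - 1, about 9e15), adjacent integers are no longer representable, so layer arithmetic that decrements by small amounts can silently stop making progress. A quick demonstration:

```javascript
// Past 2^53 - 1, subtracting 1 from a layer-sized integer can be a no-op,
// which would make iterative layer reduction (as in ssqrt) collapse.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(1e16 - 1 === 1e16);       // true: subtracting 1 changes nothing
console.log(1e15 - 1 === 1e15);       // false: still exact below 2^53
```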

3 edge cases and non-normalized Decimals

edge cases

There are 3 edge cases with number input to new Decimal/Decimal.fromNumber: Infinity, -Infinity and NaN.
To see why, try this code:

new Decimal(Infinity).eq(Decimal.dInf);
new Decimal(-Infinity).eq(Decimal.dNegInf);
new Decimal(NaN).eq(Decimal.dNaN);

You probably expect all 3 statements to be true. Unfortunately, they all evaluate to false. Why? The fromNumber Decimal method sets layer to 0 no matter what. After all fields are set, we normalize. But the normalize method only increments layer once if mag is above 9e15 - even if the Decimal's mag was Infinity/-Infinity/NaN. So we end up with Decimals with layer 1 - while Decimal.dInf has layer Infinity, Decimal.dNegInf has layer -Infinity and Decimal.dNaN has layer NaN.
Let's try something else:

new Decimal(1e100).gt(NaN);
new Decimal("1e500").gt(Infinity);

Most users would probably expect these to be false - honestly, at first I would too. Unfortunately, both are true. Again, this is because Infinity/-Infinity/NaN isn't properly handled in the fromNumber method. I can think of 3 solutions:
1. Tell users to use the Decimal.dInf, Decimal.dNegInf and Decimal.dNaN constants.
2. Handle Infinity/-Infinity/NaN properly in the fromNumber method.
3. Probably the worst option: change all methods to be aware of Decimals like {sign: 1, mag: Infinity, layer: 0} created by passing Infinity/-Infinity/NaN as input to new Decimal.
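Solution 2 could be a small guard at the top of fromNumber. A sketch, not the library's actual code - the stand-in constants follow the layer values described in this issue, and the mag values are illustrative:

```javascript
// Hypothetical sketch of solution 2: short-circuit non-finite inputs to the
// canonical constants before the "layer 0 + normalize" path ever runs.
// (dInf/dNegInf/dNaN stand in for the library's constants; mags illustrative.)
const dInf = { sign: 1, layer: Infinity, mag: Infinity };
const dNegInf = { sign: -1, layer: -Infinity, mag: Infinity };
const dNaN = { sign: NaN, layer: NaN, mag: NaN };

function fromNumber(value) {
  if (Number.isNaN(value)) return dNaN;
  if (value === Infinity) return dInf;
  if (value === -Infinity) return dNegInf;
  // finite path: layer 0 plus the usual normalization (elided here)
  return { sign: Math.sign(value), layer: 0, mag: Math.abs(value) };
}

console.log(fromNumber(Infinity) === dInf); // true
```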

non-normalized Decimals

The Decimal.fromComponents_noNormalize method can be used to create non-normalized Decimals. Skipping normalization can be desirable if you are sure the Decimal won't require it - for example, the Decimal.dConst constants that are set by the library itself.
However, this lets users create arbitrary non-normalized Decimals:

Decimal.fromComponents_noNormalize(NaN, 10, 0)

This example creates a Decimal with sign NaN - which is never supposed to happen. Why is this so bad? Methods like toString need to check for such edge cases - check if sign is NaN, check if both layer and mag are 0 before you can assume the Decimal is truly 0. All of this slows down the library.
I can't think of a good solution for this - currently there probably isn't one. Once private class fields get added, it might be a good idea to make the methods that create non-normalized Decimals private to the class.

Old browser(s) support

Currently the break_eternity code (except the TypeScript version) gets transpiled to ES5 (ES2009) compatible syntax, which I assume is done to keep backwards compatibility. But as far as I know, the only browser that doesn't support ES6 (ES2015) is Internet Explorer (and IIRC there was an official statement that it's no longer going to be supported).
So my question is: do we really need to transpile the code to ES5 compatible syntax?

Infinite height tetration leading to wrong fixed point for a range of values.

Decimal.tetrate calculates infinite height tetrations using Lambert's W function. However, for reasons I don't fully understand, for bases between ~1.0703 and e^(1/e) it goes to an unexpectedly high value. The result is in fact a fixed point of the exponential function, but it is not the same as the limit as the height goes to infinity.

Decimal.tetrate(1.0703,Infinity)+""
"60.35084336706522"
Decimal.tetrate(1.0703,100000)+""
"1.0758280726003573"
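The limit of b^^h as h goes to infinity (for 1 < b <= e^(1/e)) is the lower fixed point of x -> b^x, which a wrong Lambert-W branch can miss. The lower fixed point is easy to confirm by direct iteration:

```javascript
// Iterate x -> b^x starting from x = 1; for b <= e^(1/e) this converges to
// the LOWER fixed point, which is the true infinite-height tetration limit.
function tetrationLimit(b, steps = 100000) {
  let x = 1;
  for (let i = 0; i < steps; i++) x = Math.pow(b, x);
  return x;
}

console.log(tetrationLimit(1.0703)); // ~1.07582..., not 60.35...
```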

I need some help

Hi, I'm developing a game with break_eternity.js (so I can make stupidly big numbers), but I can't figure out how to use the suffixes (the methods called after the variable) and the Decimal class/variable type. I'm trying to understand (my English isn't the greatest). Here is a piece of the code I'm using:
[image]
Would you tell me what I did wrong, please?
Edit: here are the functions using .add (1), and .div, .pow, .mul and .add again (2):
[image]
[image]
Edit 2: upgOneLvl and upgOneCost are Decimal class variables.
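Without the screenshots it's hard to diagnose, but the usual mistake is treating Decimal methods as mutating. Every operation returns a new Decimal, so the result must be assigned back. A stand-in class with the same method shapes (plain numbers, NOT the real library) illustrating the pattern:

```javascript
// Minimal stand-in mimicking break_eternity's immutable, chainable API shape.
// (This is NOT the real Decimal class - just plain numbers for illustration.)
class FakeDecimal {
  constructor(v) { this.v = v; }
  add(o) { return new FakeDecimal(this.v + toNum(o)); }
  mul(o) { return new FakeDecimal(this.v * toNum(o)); }
  div(o) { return new FakeDecimal(this.v / toNum(o)); }
  pow(o) { return new FakeDecimal(Math.pow(this.v, toNum(o))); }
}
function toNum(o) { return o instanceof FakeDecimal ? o.v : o; }

let upgOneLvl = new FakeDecimal(3);
let upgOneCost = new FakeDecimal(10).mul(new FakeDecimal(2).pow(upgOneLvl));

upgOneLvl.add(1);             // WRONG: the result is thrown away
upgOneLvl = upgOneLvl.add(1); // RIGHT: reassign the returned value

console.log(upgOneCost.v); // 80  (10 * 2^3)
console.log(upgOneLvl.v);  // 4
```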

What comes next? - Representing larger numbers than break_eternity.js in a meaningful way

had a break_reality.js idea:
basically the format builds off of break_eternity.js.
we go from (sign, layer, mag) to (sign, arrows, layer, mag).
when arrows is 1, behaviour is the same as break_eternity.js. (and I don't think arrows of 0 or lower is supported, and non-integer arrows/layer definitely still isn't)
at some point while climbing the break_eternity.js representation (maybe at layer 10, 9e15 or 10e10), arrows goes from 1 to 2 and we normalize. normalization should be similar to break_eternity.js, in that you can always tell if one number is larger than the other quickly by comparing its numbers in order.
internal representation is sign*(10(^){arrows times}){layer times}mag.
for example, layer 5 and arrows 1 is sign*10^10^10^10^10^mag.
and layer 5 and arrows 2 is sign*10^^10^^10^^10^^10^^mag.
I know e.g. that 10^10^10^10^10 is the same as 10^^5. But I'm not sure how much it changes by if the mag is less than or greater than 10 (would need to figure out some identities)
This would let you get all the way up to 10^{1.8e308 arrows}10 (or f_1.8e308(n) in the fast growing hierarchy, or 3 -> 3 -> 1.8e308 in chained arrow notation. I think it doesn't even come close to 3 -> 3 -> 3 -> 2 ~= f_w(f_w(27)). As with previous libraries, just stuffing the library in its own fields doesn't get you that much new ground; you'd have to decide on a new, more compact large number representation instead.)
Hard part is, of course, the usual - normalizing and comparison, being able to prove that the system is mathematically sound, being able to provide good enough approximations of higher-than-exponentiation operators especially for real base and exponent, and finding meaningful functions to call on such massive numbers.
It sounds fun, but I don't know if I'll sit down and do it. I'm also not a googology expert, so I'm not sure if there's a better internal representation to use here (like the fast growing hierarchy for example).
And I DEFINITELY don't know what comes 'after' this.

Other notes:

In break_eternity.js, it's smooth and continuous (every slightly bigger value is representable, there are no 'gaps'). It would be nice if break_reality.js had the same property - that is, once arrows is at 2 or higher, the combination of layers and mag smoothly represents every value between the next value of arrows. I haven't put any thought into if this is true, or if it's not how hard it'd be to hack in. What I suspect is that arrows/layer will be smooth, but at arrows >= 2 layer/mag is no longer smooth unless we make mag start meaning a different thing. (Like, optimally it'd be able to represent the fractional part of layer from .00 to .99, right? Like in break_eternity.js range. But the thing is that at arrows >= 2, the mag is by far an extremely unimportant part of the calculation compared to the layers and especially compared to the arrows. So maybe we just ignore mag after that point (except for whether it's positive or negative?) and have layers be able to vary fractionally?)
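The "compare its numbers in order" property amounts to lexicographic comparison of the normalized tuple. A sketch for positive values of the proposed (sign, arrows, layer, mag) form, assuming normalization has already run:

```javascript
// Lexicographic comparison for normalized positive (arrows, layer, mag)
// tuples: larger arrows always wins, then larger layer, then larger mag.
function cmpPositive(a, b) {
  if (a.arrows !== b.arrows) return a.arrows > b.arrows ? 1 : -1;
  if (a.layer !== b.layer) return a.layer > b.layer ? 1 : -1;
  if (a.mag !== b.mag) return a.mag > b.mag ? 1 : -1;
  return 0;
}
```

This only works because normalization guarantees each field is within its canonical range; comparing non-normalized tuples this way would give wrong answers.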

implement hyperfactorial/K-function and superfactorial/G-function

http://mrob.com/pub/math/hyper4.html#real-hyper4

"There are three other functions that have been extended to the reals in ways that seem promising: the factorials by the Gamma function, hyperfactorials by the K-function, and the lower (Sloane and Plouffe 1995) superfactorial by Barnes' G-function."

http://mrob.com/pub/math/numbers-9.html#hyperfactorial

The hyperfactorials are: 1, 4, 108, 27648, 86400000, 4031078400000, 3319766398771200000, 55696437941726556979200000, ... (Sloane's A2109). The hyperfactorials can be extended to the real numbers; the result is the K-function, which is related to Barnes' G-function, the Gamma function and the Riemann Zeta function. The hyperfactorial of n is equivalent to K(n+1).

Hyperfactorials: Product_{k = 1..n} k^k.
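That product definition is straightforward to check against the A2109 values quoted above:

```javascript
// Hyperfactorial H(n) = product of k^k for k = 1..n (Sloane's A2109).
function hyperfactorial(n) {
  let product = 1;
  for (let k = 1; k <= n; k++) product *= Math.pow(k, k);
  return product;
}

console.log([1, 2, 3, 4, 5].map(hyperfactorial)); // [ 1, 4, 108, 27648, 86400000 ]
```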

http://mrob.com/pub/math/numbers-11.html#superf1

288 = 4!×3!×2!×1! = 4^4+3^3+2^2+1^1

(4 superfactorial by the Sloane-Plouffe definition)

This is the value of "4 superfactorial" by the lower (Sloane and Plouffe 1995) definition of "superfactorial": 4!×3!×2!×1! = 24×6×2×1 = 288. By a rather nifty coincidence, it is also equal to 4^4+3^3+2^2+1^1 = 256+27+4+1. See also 34560, 5056584744960000, and 2.703176857×10^6940.
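Both identities for 4 are easy to verify directly:

```javascript
// Lower (Sloane-Plouffe) superfactorial: sf(n) = n! * (n-1)! * ... * 1!.
function factorial(n) { return n <= 1 ? 1 : n * factorial(n - 1); }
function superfactorial(n) {
  let product = 1;
  for (let k = 1; k <= n; k++) product *= factorial(k);
  return product;
}

// The coincidence noted above: sf(4) = 4^4 + 3^3 + 2^2 + 1^1 = 288.
const powerSum = Math.pow(4, 4) + Math.pow(3, 3) + Math.pow(2, 2) + Math.pow(1, 1);
console.log(superfactorial(4), powerSum); // 288 288
```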

Barnes' G-function

Barnes' G-function is to superfactorials as the Gamma function is to normal factorials. Barnes' G-function can be (very slowly) calculated by the formula:

G(z+1) = (2π)^(z/2) · e^(−[z(z+1) + γz^2]/2) · ∏_{n=1..∞} [ (1 + z/n)^n · e^(−z + z^2/(2n)) ]

where γ is the Euler-Mascheroni constant. For sufficiently large values of z you can use the approximation:

G(n) ≈ (e^(1/12)/A) · n^(n^2/2 − 1/12) · (2π)^(n/2) · e^(−3n^2/4)

where A is the Glaisher-Kinkelin constant. See the MathWorld page for more.

another page with definitions of both:

http://mrob.com/pub/math/largenum-3.html

Add a LICENSE file

Dozens of free, unmonetized games already use b_e, but technically this library has no usage license, and Patashu could go after those games with C&Ds or worse. Since b_i includes an MIT LICENSE file, I assume this library was intended to also be under the MIT license, but currently it's legally not.

weird edge cases in layeradd10 for numbers below 1

I found a bug in what I presume is layeradd10:

Decimal.layeradd10(0, 0).slog().toString()
'-1'
Decimal.layeradd10(0, 1).slog().toString()
'0'
Decimal.layeradd10(1e-10, 0).slog().toString()
'-0.9999999998641643'
Decimal.layeradd10(1e-10, 1).slog().toString()
'1.3583567604058544e-10'
Decimal.layeradd10(1e-100, 0).slog().toString()
'-1'
Decimal.layeradd10(1e-100, 1).slog().toString()
'-1'

We seem to lose the ability to add 1 layer to these very smol numbers.

Well... It kind of works but we don't normalize it lol:

new Decimal(1e-100).layeradd10(0)
Decimal {sign: 1, mag: -100, layer: 1}
new Decimal(1e-100).layeradd10(1)
Decimal {sign: 1, mag: -100, layer: 2}

used to be broken too

new Decimal(1e-15).layeradd10(0)
Decimal {sign: 1, mag: 1e-15, layer: 0}
new Decimal(1e-15).layeradd10(1)
Decimal {sign: 1, mag: 1.0000000000000022, layer: 0}
new Decimal(1e-16).layeradd10(0)
Decimal {sign: 1, mag: -16, layer: 1}
new Decimal(1e-16).layeradd10(1)
Decimal {sign: 1, mag: -16, layer: 2}

It breaks around where we cross over to 'negative layers', so we need a special case to handle it - e.g. we do Math.pow(10, -16) and remove a layer.

This also doesn't round-trip for anything below 1, but maybe it shouldn't? This also used to be broken:

new Decimal(1).layeradd10(-1).layeradd10(1)
Decimal {sign: 1, mag: 1, layer: 0}
new Decimal(0.5).layeradd10(-1).layeradd10(1)
Decimal {sign: -1, mag: 2, layer: 0}
new Decimal(1e-10).layeradd10(-1).layeradd10(1)
Decimal {sign: -1, mag: 10000000000, layer: 0}
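The special case suggested above (apply Math.pow(10, mag) and remove a layer when a layer-1+ result ends up with nonpositive mag) can be sketched as a hypothetical post-step; this is not the library's actual normalize:

```javascript
// Hypothetical post-step for layeradd10: if a layer-1+ result ends up with a
// nonpositive mag, fold it back down one layer so the result is normalized.
function demoteNegativeMag(d) {
  let { sign, layer, mag } = d;
  while (layer >= 1 && mag <= 0) {
    mag = Math.pow(10, mag); // e.g. mag -100 at layer 1 becomes ~1e-100 at layer 0
    layer -= 1;
  }
  return { sign, layer, mag };
}

console.log(demoteNegativeMag({ sign: 1, layer: 1, mag: -100 }));
// { sign: 1, layer: 0, mag: 1e-100 }
```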

change the critical section for tetration/inverses from linear approximation to something more analytical (mostly done!)

two things to do:

  1. handle negative heights. https://en.wikipedia.org/wiki/Tetration#Linear_approximation_for_real_heights has a test case to try.

  2. I also haven't tested fractional heights > 1 yet, because I didn't have a worked example for it. BUT, I think it is okay, because http://mrob.com/pub/math/hyper4.html#real-hyper4 talks about it being defined in the same way I do (add one to the height and pow payload to the fraction). But MROB then goes on to define it in a different way that has higher accuracy for 'small' values (greater than 1, less than 1e1e308) which we could use for such cases if it's easy to implement (but it looks not easy lol)

(we could also try the quadratic approximation. https://en.wikipedia.org/wiki/Tetration#Higher_order_approximations_for_real_heights I'm not sure if it actually gives better accuracy or just makes it differentiable, though. And now that I see MROB's version above, I might actually just use that and skip even trying this?)
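For reference, the Wikipedia linear approximation handles all real heights with a base case on (-1, 0] plus two recursive rules (exp upward, log downward):

```javascript
// Linear approximation of tetration b^^h for real h, per Wikipedia's
// "Linear approximation for real heights": base case h in (-1, 0],
// recurse upward with pow and downward with log.
function tetrateLinear(b, h) {
  if (h > -1 && h <= 0) return 1 + h;
  if (h > 0) return Math.pow(b, tetrateLinear(b, h - 1));
  return Math.log(tetrateLinear(b, h + 1)) / Math.log(b); // h <= -1
}

console.log(tetrateLinear(2, 3));    // 16 (2^2^2)
console.log(tetrateLinear(10, 0.5)); // 3.1622... (10^0.5, since 10^^(-0.5) = 0.5)
```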

Tetration & super-logarithm not working as expected

It would appear to me that the "tetrate" and "slog" functions are not working as expected.

new Decimal(10).tetrate(1.5) returns 300.2723103062356. I was expecting 1453.0403018990435 (10^10^0.5), but this in itself is not a major issue.

The problem is here:
new Decimal(1e9).slog(10) returns 1.9714465989625964, but new Decimal(10).tetrate(1.9714465989625964) returns 1380175625.5753584.

How can that be right?

Mutability and returning existing Decimals from methods

tl;dr: Some methods can return this, an argument, or one of the static Decimal.dConst constants. If users mutate those, they might see unintended consequences. Working as intended (WAI), or a bug? If it's a bug, dealing with it in a performant way is complicated.


This is a contrived function, but bear with me.

function f(a: Decimal, b: Decimal): Decimal {
  // Calculates 11a + 11b in a weird way.
  const sum = a.add(b);
  sum.exponent++;
  // Sum is now 10(a+b) = 10a + 10b.
  return sum.add(a).add(b);
}

This function works as expected... until b is 0. Then the first .add will return a as-is, setting sum to be a. The exponent increment multiplies a by 10, and the final return is 20a. This also mutates the value of a which is probably unexpected to anyone calling this function. The same applies if a is 0, as the first .add will return b in that case.
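The hazard is easy to reproduce with a stand-in that follows the same "return an existing object when one operand is zero" shortcut (mock class, not the real library):

```javascript
// Mock reproducing the shortcut: add() returns an existing object when one
// operand is zero, so a later in-place mutation aliases the caller's value.
class MockDecimal {
  constructor(value) { this.value = value; }
  add(other) {
    if (other.value === 0) return this;  // shortcut: no allocation...
    if (this.value === 0) return other;  // ...but the result is aliased
    return new MockDecimal(this.value + other.value);
  }
}

const a = new MockDecimal(7);
const sum = a.add(new MockDecimal(0)); // sum IS a, not a copy
sum.value *= 10;                       // "mutate the sum"...
console.log(a.value);                  // 70 - the caller's a changed too
```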

A similar thing happens with any function that returns this, an argument, or one of the static Decimal.dConst constants. If users mutate those, they might see unintended consequences - especially the constants, as they are assumed to be immutable and widely used in internal break_eternity functions.

Is this working as intended, or is this a bug in break_eternity?

If this is a bug in break_eternity, then we'll need to create copies (and therefore new allocations) of all return values if they could be one of the aforementioned "should not be mutated" values...

...but copies would presumably slow down code - even code which doesn't mutate Decimals...
...so a user-controlled switch could be added to say "I swear I will never mutate a Decimal"...

...but then internal break_eternity code like d_lambertw, which doesn't mutate decimals, will be slowed down if that is false...
...unless internal code has an override for the above switch which always sets it to false while inside of an internal break_eternity function...
...which will cause problems if the function throws an error, as the user's original choice wouldn't be restored...
...but that could be caught using a try/catch...
...but try/catch would presumably slow down code.

I haven't benchmarked the exact slowdown of (creating copies of Decimals all the time) vs. (giving users the option to not create copies) vs. (always setting that to "don't copy" inside internal functions, with a try/catch statement to restore the user's choice). Each additional performance mitigation introduces more and more complicated code, which is harder to maintain.

Perhaps it's worth it to enforce immutability by deprecating the .fromX methods and properties, instead of making copies selectively?

add mantissaExponent and variants for backwards compatibility

should be very easy, just make sure to handle negative exponents correctly.

slightly harder would be to make mantissa/exponent setters that temporarily express the number in mantissa/exponent form, change the m/e and then recalculate mag (and throw an error if it's too large to be a meaningful operation)
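For a layer-0 Decimal the getter is just base-10 decomposition of mag. A sketch (hypothetical helper; it ignores layers >= 1 and handles only the basic zero/negative-exponent cases):

```javascript
// Hypothetical mantissa/exponent decomposition for a layer-0 magnitude:
// mag = mantissa * 10^exponent with 1 <= |mantissa| < 10 (or 0 for zero).
function toMantissaExponent(mag) {
  if (mag === 0) return { mantissa: 0, exponent: 0 };
  const exponent = Math.floor(Math.log10(Math.abs(mag)));
  return { mantissa: mag / Math.pow(10, exponent), exponent };
}

console.log(toMantissaExponent(1234));   // { mantissa: 1.234, exponent: 3 }
console.log(toMantissaExponent(0.0056)); // exponent -3, mantissa ~5.6
```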

ssqrt gives 'iteration failed to converge' range for certain range of numbers

new Decimal.fromComponents(1,2,1000000000).ssqrt().toString()
Uncaught Error: Iteration failed to converge: 2.3025848250427177e1000000000
    at d_lambertw (break_eternity.html:2469)
    at Decimal.lambertw (break_eternity.html:2417)
    at Decimal.ssqrt (break_eternity.html:2482)

new Decimal.fromComponents(1,2,1000).ssqrt().toString()
Uncaught Error: Iteration failed to converge: 2.3025850929942786e1000
    at d_lambertw (break_eternity.html:2469)
    at Decimal.lambertw (break_eternity.html:2417)
    at Decimal.ssqrt (break_eternity.html:2482)

new operator: iteratedlog(payload, base, height)

if height is integer, apply logb to payload height times, taking the appropriate 'just reduce layer by X' shortcut when payload starts out much larger than base.

if height is real, then apply logb the fractional amount of a time, either at the start or the end, whatever makes for the most realistic operator.
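For ordinary JS numbers the integer-height case is just a loop; a sketch (without the layer-reduction shortcut, which needs the full sign/layer/mag representation):

```javascript
// Integer-height iterated logarithm: apply log base b to x, height times.
// (No layer shortcut here - this is the plain-number version only.)
function iteratedLog(x, base, height) {
  for (let i = 0; i < height; i++) x = Math.log(x) / Math.log(base);
  return x;
}

console.log(iteratedLog(Math.pow(10, 100), 10, 2)); // ~2 (log10 twice: 100, then 2)
```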

Loading decimals from localStorage

Is there a function which can check if new Decimal(x) is a valid number? I'd like to be able to automate loading from localStorage without having to make a list of around 100 variables that need to be converted into a Decimal.

Examples:
Decimal.valid("3.141592653589793" ) = true
Decimal.valid("(e^8)15") = true
Decimal.valid(true) = false
Decimal.valid([1,1,2,3,5,8]) = false
Decimal.valid("Mixed scientific") = false
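In the absence of a built-in, a format-level check can cover the cases above. A sketch that accepts plain numerics and the "(e^N)X" layered strings - the regex is illustrative, not the library's actual grammar, and it would reject shorthands the real parser accepts (like "e15"):

```javascript
// Illustrative validity check: accept finite numbers, finite numeric strings,
// and the layered "(e^N)X" string format; reject everything else.
function looksLikeDecimal(x) {
  if (typeof x === "number") return Number.isFinite(x);
  if (typeof x !== "string") return false;
  // strip an optional sign + "(e^N)" prefix, then the rest must be numeric
  x = x.replace(/^-?\(e\^\d+\)/, "");
  return x.trim() !== "" && Number.isFinite(Number(x));
}

console.log(looksLikeDecimal("3.141592653589793")); // true
console.log(looksLikeDecimal("(e^8)15"));           // true
console.log(looksLikeDecimal(true));                // false
console.log(looksLikeDecimal("Mixed scientific"));  // false
```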

`Decimal.exp(-1000).eq(Decimal.exp(1000))` is true

They're both large positive numbers (I think they're both what Decimal.exp(1000) should be). Decimal.exp(-100).eq(Decimal.exp(100)) is false, which makes me think this only happens past infinity. I think this might be at least partially the cause of #32 .
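For reference, this is consistent with how plain doubles behave: Math.exp saturates in both directions around exp(±710), so an unguarded overflow path that drops the sign of the exponent would conflate the two cases.

```javascript
// Plain doubles saturate in both directions well before |x| = 1000:
console.log(Math.exp(1000));  // Infinity (overflow)
console.log(Math.exp(-1000)); // 0 (underflow)
console.log(Math.exp(100));   // still finite, ~2.7e43
```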

Subtract breaks with layer(s)

new Decimal("ee17").minus(new Decimal("ee18")).toString() returns "ee17" - incorrect
new Decimal("ee17").minus(new Decimal("eeee17")).toString() returns "ee17" - incorrect
new Decimal("ee15").minus(new Decimal("ee16")).toString() returns "1e1000000000000000" - incorrect
new Decimal("e15").minus(new Decimal("e16")).toString() returns "-9.000000000000007e16" - correct

Allow negative mag in layer >= 1? (would be super nice but looks annoying)

This would alleviate dan-simon's concern that we can't represent very negative magnitude numbers like 1e-400, allowing for it to be a true sequel to break_infinity.js (which can).

Things we'd have to do:

  1. change normalize to check abs(mag) instead of mag, so that negative mags are promoted and demoted at the same thresholds as positive mags. In addition, whenever we take log10 of mag, we have to do something like sign(mag)*log10(abs(mag)).

  2. Everywhere else that takes log10 of mag also has to use the new function.

  3. cmp and cmpabs need to be changed. Something like - if mag is negative, negate layer before comparing. For example:

1, 1, -400

1, 1, -500

Negate layer. A positive -1 layer number is smaller than any layer 0 number (true) and bigger than any -2 layer number (true).

1e-400 is bigger than 1e-500 and -400 > -500, so we return 1.

(Also while I'm doing this, check if cmp for negative numbers even works right, because it's probably broken...)

  4. recip needs to be changed - just negate the mag. However, make sure to correctly handle mag === 5e-324 and similar cases.

find operators that are stronger than pow but weaker than pentation (then implement them)

pow and tetration are 'boring' for reaching higher layers in the sense that they just count up layers one at a time past a certain point - everything devolves into min, max, cmp, succ and add operations. but pentation is way too strong, lacks research into non-integer arguments and explodes way too rapidly for even tiny integers.

what would be some kind of in-between operator that e.g. a crazy incremental game could use to surge up the layers to 10^^1.8e308?

real numbered hyper operators > 3 and < 5? (0 is successor, 1 is add, 2 is mul, 3 is pow, 4 is tetrate, 5 is pentate.)

functions arbitrarily crafted to have approximately hyper operator >= 0 <= 3 strength behaviour on the layers of their arguments, with magnitudes interpreted as partial layers?
