hipparchus-math / hipparchus
An efficient, general-purpose mathematics components library in the Java programming language
License: Apache License 2.0
Hipparchus has an interface for univariate real functions over any field type (FieldUnivariateFunction); it would be interesting to add a corresponding interface for bivariate real functions over any field type (FieldBivariateFunction).
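A minimal sketch of what such an interface might look like, mirroring the generic-method style of FieldUnivariateFunction (the name FieldBivariateFunction and the RealFieldElement bound are assumptions based on the existing univariate interface):
import org.hipparchus.RealFieldElement;
/** Hypothetical bivariate counterpart of FieldUnivariateFunction (sketch only). */
public interface FieldBivariateFunction {
    /**
     * Compute the value of the function.
     * @param x first coordinate
     * @param y second coordinate
     * @return f(x, y)
     */
    <T extends RealFieldElement<T>> T value(T x, T y);
}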
Bryan
The class PolynomialFunction implements the interface UnivariateDifferentiableFunction; it would also be useful if this class implemented the interface RealFieldUnivariateFunction.
The same remark applies to PolynomialFunctionNewtonForm and PolynomialSplineFunction.
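As an illustration, the field evaluation could reuse the existing coefficients with Horner's scheme; the sketch below is written as a standalone helper rather than the actual method that would be added to PolynomialFunction:
import org.hipparchus.RealFieldElement;
import org.hipparchus.analysis.polynomials.PolynomialFunction;
/** Sketch of how PolynomialFunction could evaluate itself for a field argument. */
public static <T extends RealFieldElement<T>> T value(final PolynomialFunction p, final T x) {
    final double[] c = p.getCoefficients();
    // Horner's scheme, starting from the highest-degree coefficient
    T result = x.getField().getZero().add(c[c.length - 1]);
    for (int i = c.length - 2; i >= 0; i--) {
        result = result.multiply(x).add(c[i]);
    }
    return result;
}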
Thanks
Adapt the code from jenetics to hipparchus to make the PSquarePercentile implementation also aggregatable.
SphericalPolygonsSet instances can be built from a list of vertices, which are points on the 2D unit sphere. When these vertices form a zigzag or star shaped boundary and two distant edges happen
to be on the same circle (according to the hyperplaneThickness parameter setting) and these edges
are in opposite orientation, then the polygon built is completely wrong.
This test case is an example of this behavior. If the hyperplane thickness (first constructor parameter) is set to 1.0e-10, then all edges are considered to belong to separate circles and the zone is properly built.
If the hyperplane thickness is set to 1.0e-6, then the edge built from vertices at indices 6 and 7 (counting from 0) and the edge built from vertices at indices 10 and 11 are considered to belong to the same circle, and the polygon built is different.
@Test
public void testZigZagBoundary() {
SphericalPolygonsSet zone = new SphericalPolygonsSet(1.0e-6,
new S2Point(-0.12630940610562444, 0.8998192093789258),
new S2Point(-0.12731320182988207, 0.8963735568774486),
new S2Point(-0.1351107624622557, 0.8978258663483273),
new S2Point(-0.13545331405131725, 0.8966781238246179),
new S2Point(-0.14324883017454967, 0.8981309629283796),
new S2Point(-0.14359875625524995, 0.896983965573036),
new S2Point(-0.14749650541159384, 0.8977109994666864),
new S2Point(-0.14785037758231825, 0.8965644005442432),
new S2Point(-0.15369807257448784, 0.8976550608135502),
new S2Point(-0.1526225554339386, 0.9010934265410458),
new S2Point(-0.14679028466684121, 0.9000043396997698),
new S2Point(-0.14643807494172612, 0.9011511073761742),
new S2Point(-0.1386609051963748, 0.8996991539048602),
new S2Point(-0.13831601655974668, 0.9008466623902937),
new S2Point(-0.1305365419828323, 0.8993961857946309),
new S2Point(-0.1301989630405964, 0.9005444294061787));
Assert.assertEquals(Region.Location.INSIDE, zone.checkPoint(new S2Point(-0.145, 0.898)));
Assert.assertEquals(6.463e-5, zone.getSize(), 1.0e-7);
Assert.assertEquals(5.487e-2, zone.getBoundarySize(), 1.0e-4);
}
From the CDF definition at:
should line 100 be
ret = 1.0 - FastMath.exp(-x * mean);
and line 119 be
ret = -1/mean * FastMath.log(1.0 - p);
?
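For reference, the standard exponential distribution formulas written in terms of the mean \mu and the rate \lambda = 1/\mu (whether the variable named mean in the code holds \mu or \lambda is exactly the point that needs checking):
F(x) = 1 - e^{-x/\mu} = 1 - e^{-\lambda x}, \qquad F^{-1}(p) = -\mu \ln(1 - p) = -\frac{1}{\lambda} \ln(1 - p)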
This test case fails. The 3D shape is a cube with square holes drilled along each axis.
The outline extracted when looking along the Z axis is empty, whereas it should be a big square
with a square hole in the middle.
@Test
public void testHolesInFacet() {
double tolerance = 1.0e-10;
PolyhedronsSet cube = new PolyhedronsSet(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0, tolerance);
PolyhedronsSet tubeAlongX = new PolyhedronsSet(-2.0, 2.0, -0.5, 0.5, -0.5, 0.5, tolerance);
PolyhedronsSet tubeAlongY = new PolyhedronsSet(-0.5, 0.5, -2.0, 2.0, -0.5, 0.5, tolerance);
PolyhedronsSet tubeAlongZ = new PolyhedronsSet(-0.5, 0.5, -0.5, 0.5, -2.0, 2.0, tolerance);
RegionFactory<Euclidean3D> factory = new RegionFactory<>();
PolyhedronsSet cubeWithHoles = (PolyhedronsSet) factory.difference(cube,
factory.union(tubeAlongX,
factory.union(tubeAlongY, tubeAlongZ)));
Assert.assertEquals(4.0, cubeWithHoles.getSize(), 1.0e-10);
Vector2D[][] outline = new OutlineExtractor(Vector3D.PLUS_I, Vector3D.PLUS_J).getOutline(cubeWithHoles);
Assert.assertEquals(2, outline.length);
Assert.assertEquals(4, outline[0].length);
Assert.assertEquals(4, outline[1].length);
}
The EnumeratedRealDistribution and EnumeratedIntegerDistribution constructors that take parallel arrays of values and masses do not verify that the masses sum to 1. It is possible to create a "distribution" that is not a probability distribution. The probability arrays should be normalized to sum to 1 and a check should be added to ensure that at least one entry is positive.
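For illustration, a call along these lines is currently accepted even though the masses sum to 0.5 rather than 1 (a hypothetical reproduction sketch; the package location is assumed to be org.hipparchus.distribution.continuous):
import org.hipparchus.distribution.continuous.EnumeratedRealDistribution;
// masses sum to 0.5; no exception is thrown and no normalization is performed (sketch)
EnumeratedRealDistribution d = new EnumeratedRealDistribution(
        new double[] { 1.0, 2.0 },     // values
        new double[] { 0.2, 0.3 });    // masses, sum != 1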
After the detection of a discontinuous event, the RESET_DERIVATIVES action is triggered. However, the resetOccurred flag in AbstractIntegrator is not reset to false after the reset has been handled, and spurious restarts occur between events. Please see: https://forum.orekit.org/t/adamsbashforthintegrator-propagation-with-srp/400/2
A temporary fix suggested by Luc Maisonobe was to add the statement resetOccurred = false; before boolean doneWithStep = false; in AbstractIntegrator.
It is possible to generate a MathIllegalStateException when using very small SphericalPolygonsSet instances. See code below to reproduce.
S2Point[] s2pA = new S2Point[]{
new S2Point(new Vector3D(0.1504230736114679, -0.6603084987333554, 0.7357754993377947)),
new S2Point(new Vector3D(0.15011191112224423, -0.6603400871954631, 0.7358106980616113)),
new S2Point(new Vector3D(0.15008035620222715, -0.6605195692153062, 0.7356560238085725)),
new S2Point(new Vector3D(0.1503914563063968, -0.6604879854490165, 0.7356208472763267))
};
final SphericalPolygonsSet spsA = new SphericalPolygonsSet(1E-100, s2pA);
spsA.getSize();
Currently we see error messages like "Interval does not bracket a root: f(1.0) = -0.0, f(1.0) = -0.0", which is misleading because the numbers are rounded instead of being shown at full precision. Since these exception messages are primarily for developers, the full precision should be used.
See also: https://forum.orekit.org/t/workaround-to-interval-does-not-bracket-a-root-f-x-0-f-y-0-7/487
Kotlin has a lot of slick features, like eliminating the need for builders through named arguments, that I think would be useful for Hipparchus. You can also compile to both JavaScript and various versions of the JVM. See:
https://stackoverflow.com/questions/46858270/does-there-exist-a-babel-like-compiler-for-java
https://stackoverflow.com/questions/46892929/are-number-operations-using-kotlin-as-fast-as-the-equivalent-with-java-primitive
General info:
https://medium.com/@magnus.chatt/why-you-should-totally-switch-to-kotlin-c7bbde9e10d5
The val keyword (non-modifiable properties) would be really helpful in thread-safe designs.
Kotlin could compile to JavaScript / TypeScript and be published on NPM, which should bring more contributors into the fold.
The following code generates an internal error:
final WelzlEncloser<Euclidean3D, Vector3D> encloser =
new WelzlEncloser<Euclidean3D, Vector3D>(1e-14, new SphereGenerator());
List<Vector3D> points = new ArrayList<Vector3D>();
points.add(new Vector3D(0.9999999731, 0.000200015, 0.0001174338));
points.add(new Vector3D(0.9987716667, 0.0350821284, 0.0349914572));
points.add(new Vector3D(0.9987856181, -0.0346743952, 0.0349996489));
points.add(new Vector3D(0.9987938115, -0.0346825853, -0.0347568755));
points.add(new Vector3D(0.9987798601, 0.0350739383, -0.0347650673));
EnclosingBall<Euclidean3D, Vector3D> enclosing3D = encloser.enclose(points);
The UnivariateSolverUtils class provides helper functions for bracketing a regular UnivariateFunction prior to finding a root, but there are no such functions for RealFieldUnivariateFunction.
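A sketch of what a field counterpart could look like, modeled on UnivariateSolverUtils.isBracketing (the method name and placement are assumptions; an expanding bracket(...) helper would follow the same pattern):
import org.hipparchus.RealFieldElement;
import org.hipparchus.analysis.RealFieldUnivariateFunction;
/** Hypothetical field version of isBracketing (sketch only). */
public static <T extends RealFieldElement<T>> boolean isBracketing(final RealFieldUnivariateFunction<T> function,
                                                                   final T lower, final T upper) {
    final double fLo = function.value(lower).getReal();
    final double fHi = function.value(upper).getReal();
    // a sign change (or an exact zero at either end) means the interval brackets a root
    return (fLo >= 0 && fHi <= 0) || (fLo <= 0 && fHi >= 0);
}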
This was reported as (Apache Commons) MATH-1381. When the number of successes is 0 or equal to the number of trials, in some cases the 2-sided test algorithm will double-count the probability of the extreme value, causing the returned p-value to be inflated.
This issue is the Hipparchus counterpart of Orekit issue https://gitlab.orekit.org/orekit/orekit/issues/485.
In order to analyze filter performance and perform smoothing, access to intermediate matrices in the Kalman filters is required. The needed matrices are:
Any storeless univariate statistic should extend DoubleConsumer. The accept(double) method should delegate to increment(double).
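A minimal sketch of the proposed change (the real interface has more members; only the DoubleConsumer wiring is shown, assuming a Java 8 default method is acceptable):
import java.util.function.DoubleConsumer;
public interface StorelessUnivariateStatistic extends DoubleConsumer {
    /** Existing method: update the statistic with a new value. */
    void increment(double d);
    /** Proposed delegation, so statistics can be fed directly from streams. */
    @Override
    default void accept(double value) {
        increment(value);
    }
}
With that in place, something like DoubleStream.of(1, 2, 3).forEach(stat) would update the statistic directly.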
The javadoc references for the Gumbel distribution CDF and PDF seem to differ:
Wolfram defines the CDF differently from the implementation:
final double z = (x - mu) / beta;
return 1.0 - FastMath.exp(-FastMath.exp(z));
Can you clarify which definitions are used in the GumbelDistribution class?
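For reference, with z = (x - \mu)/\beta the two common conventions are (which one the class intends is exactly what needs clarifying):
F_{\max}(x) = e^{-e^{-z}}, \qquad F_{\min}(x) = 1 - e^{-e^{z}}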
The way SphericalPolygonsSet (and presumably PolygonsSet) implements the checkPoint(...) method can lead to points arbitrarily far away from the center-line of the boundary being considered part of the boundary. This means that points inside a region and far away from the center-line of the boundary may be considered to be part of the boundary. Similarly, points outside the region and far away from it may be considered to be part of the boundary. Here "far" means the tolerance multiplied by some large number.
The article in 1 provides a good description of the issue. Hipparchus currently uses a mitre join (at left), which leads to very long points. Using a round (middle) or bevel (right) join would fix the issue. I think a round join is the most intuitive meaning for the tolerance.
I don't know if this is worth fixing or if this is merely a theoretical problem. Maps (one of the use cases for SphericalPolygonsSet) tend to have some very strange boundaries.
I've used the code below with a tolerance of 1e-3 to produce the "Hipparchus" points in the plot below. As you can see, even though a point is several orders of magnitude further away from the center-line of the boundary, it can still be considered part of the boundary.
double tol = 0.001;
int n = 100;
double step = FastMath.PI / n;
for (int i = 0; i < n; i++) {
double angle = FastMath.PI - i * step;
RegionFactory<Sphere2D> factory = new RegionFactory<>();
SphericalPolygonsSet plusX = new SphericalPolygonsSet(Vector3D.PLUS_I, tol);
SphericalPolygonsSet plusY = new SphericalPolygonsSet(Vector3D.PLUS_J, tol);
SphericalPolygonsSet plusZ = new SphericalPolygonsSet(new Vector3D(0, -FastMath.cos(angle), FastMath.sin(angle)), tol);
SphericalPolygonsSet octant =
(SphericalPolygonsSet) factory.intersection(factory.intersection(plusX, plusY), plusZ);
Circle bisect = new Circle(new Vector3D(0, -FastMath.cos(angle / 2), FastMath.sin(angle / 2)), tol);
final double phase0 = bisect.getPhase(Vector3D.PLUS_I);
final double boundary = UnivariateSolverUtils.solve(
x -> octant.checkPoint(new S2Point(bisect.getPointAt(x))) == Location.OUTSIDE ? 1 : -1,
phase0 - FastMath.PI / 2,
phase0);
final double offset = MathUtils.normalizeAngle(boundary, phase0) - phase0;
out.write(String.format("%20f %20f\n", angle, offset)); // "out" is a java.io.Writer opened elsewhere
}
We have been discussing the necessity of a Kalman filter in Hipparchus (Luc, I, Maxime).
One alternative, the proposal of this issue, is to create a new submodule called "hipparchus-filtering" that will contain the code from the last version of Apache Commons Math (http://commons.apache.org/proper/commons-math/javadocs/api-3.6/org/apache/commons/math3/filter/package-summary.html) ported to Hipparchus.
A small enhancement would be to change the KalmanFilter implementation from Apache, because it uses a very strict DEFAULT_RELATIVE_SYMMETRY_THRESHOLD in the correct method (namely, in the CholeskyDecomposition). Therefore, it would be great if the KalmanFilter could receive, in an additional constructor, the relative symmetry threshold for the CholeskyDecomposition.
The goal of this issue is to allow a centralized discussion between the contributors.
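The relevant knob already exists on CholeskyDecomposition, so the change is essentially about letting the filter pass it through. A hedged illustration of the decomposition call with a relaxed relative symmetry threshold (the matrix and the threshold value are made up for illustration):
import org.hipparchus.linear.CholeskyDecomposition;
import org.hipparchus.linear.MatrixUtils;
import org.hipparchus.linear.RealMatrix;
// nearly symmetric innovation covariance; the strict default relative symmetry
// threshold can reject matrices like this one
RealMatrix s = MatrixUtils.createRealMatrix(new double[][] {
    { 2.0, 1.0000000000001 },
    { 1.0, 3.0             }
});
double relaxedSymmetryThreshold = 1.0e-10; // illustration only
CholeskyDecomposition chol = new CholeskyDecomposition(
        s, relaxedSymmetryThreshold, CholeskyDecomposition.DEFAULT_ABSOLUTE_POSITIVITY_THRESHOLD);
RealMatrix sInverse = chol.getSolver().getInverse();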
New methods have been added to Math/StrictMath with Java 9. As some unit tests in Hipparchus check that FastMath is always a drop-in replacement for Math/StrictMath and use introspection for this purpose, these tests fail when the JVM used is based on Java 9.
In some cases, additional equations may need to change the derivatives of the primary state. One use case is optimal control, where the secondary equations handle the co-state, which changes the control, and the control changes the primary state. In this case, the primary and secondary equations are not really independent from each other, so if possible it would be better to put state and co-state and their equations all in the primary equations. However, this is not always possible, so it would be better to explicitly allow secondary equations to have this side effect. In fact, despite not being advertised, this was possible with Apache Commons Math 3.x, and this feature was inadvertently removed in Hipparchus as a side effect of cleaning up the API.
The nextDeviate methods in RandomDataGenerator should provide better implementations than the generic inversion-based methods for enumerated real and integer distributions.
This is related to axkr/symja_android_library#60
This snippet returns 4.999999701976776 for the BisectionSolver:
public void testBisection() {
BisectionSolver solver = new BisectionSolver();
System.out.println(solver.solve(100, x->Math.cos(x)+2, 0.0, 5.0));
}
The other solvers throw an exception if there is no solution.
A performance improvement for the Array2DRowRealMatrix methods getRow() and setRow() has just been applied to Commons Math. Maybe you are also interested:
https://issues.apache.org/jira/browse/MATH-1425
https://issues.apache.org/jira/secure/attachment/12878347/patch
The implementation in WilcoxonSignedRankTest does not handle tied pairs appropriately and the continuity correction applied when computing the normal approximation is incorrect.
Handling of ties should ideally be configurable (see e.g. scipy.stats.wilcoxon). Minimally, the implementation should document and correctly implement a strategy for handling tied pairs.
This issue was originally reported as MATH-1233.
As a user I would expect that sorting the array and taking the k-th element yields the same result as using the KthSelector. Unfortunately, this is not true if the array contains NaN or -0/+0 values, because the implementation uses < and > operators for comparison instead of Double.compare.
Can you show an example/outline of how to implement a DerivativeStructure based on symbolic derivation?
At the moment I've used my own NewtonSolver in FindRoot, but it would be nice to have a general solution which I can use with Hipparchus.
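DerivativeStructure computes derivatives by automatic differentiation rather than symbolic derivation, but for a Newton-type solver the effect is the same: value and first derivative come out of a single evaluation. A small sketch using the DSFactory API of recent Hipparchus versions (the function is just an example):
import org.hipparchus.analysis.differentiation.DSFactory;
import org.hipparchus.analysis.differentiation.DerivativeStructure;
// one free variable, derivatives up to order 1
DSFactory factory = new DSFactory(1, 1);
double x = 1.0;
for (int i = 0; i < 10; i++) {
    // f(x) = x^3 - 2x - 5, evaluated together with its first derivative
    DerivativeStructure xDS = factory.variable(0, x);
    DerivativeStructure f  = xDS.pow(3).subtract(xDS.multiply(2)).subtract(5);
    x -= f.getValue() / f.getPartialDerivative(1); // Newton step
}
System.out.println(x); // converges to the real root near 2.0946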
In the page:
https://www.hipparchus.org/hipparchus-core/analysis.html#Interpolation
The following two links are broken:
https://www.hipparchus.org/apidocs/org/hipparchus/analysis/interpolation/BicubicSplineInterpolator.html
https://www.hipparchus.org/apidocs/org/hipparchus/analysis/interpolation/BicubicSplineInterpolatingFunction.html
The API Documentation:
https://hipparchus.org/apidocs/org/hipparchus/analysis/interpolation/UnivariateInterpolator.html
does not show Bicubic.
There is also a mention of a "Microsphere" interpolator but no link to the class/implementation API.
HTHS
Assuming this affects hipparchus as well:
https://issues.apache.org/jira/browse/MATH-1373
Seems to happen randomly in rare cases. Working on a test case.
The FastMath.sinCos method has been added in version 1.3 to speed up computation
where both sine and cosine are required for the same angle. This is particularly true
for derivatives.
A Field equivalent would be welcome.
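A sketch of what the field variant could look like; the FieldSinCos holder below is an assumption made for illustration, not an existing class:
/** Hypothetical holder pairing the sine and cosine of the same field angle (sketch only). */
public class FieldSinCos<T> {
    private final T sin;
    private final T cos;
    public FieldSinCos(final T sin, final T cos) {
        this.sin = sin;
        this.cos = cos;
    }
    public T sin() { return sin; }
    public T cos() { return cos; }
}
A companion method such as public static <T extends RealFieldElement<T>> FieldSinCos<T> sinCos(T x) in FastMath could then share the argument reduction between the two results instead of computing x.sin() and x.cos() separately.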
It is possible to create an invalid Sphere2D BSPTree containing a node with a null cut and attribute using the RegionFactory union method.
I have been able to trace this down to the SubCircle split method. The returned SplitSubHyperplane contains one side of the split as null, but the BSPTree split method assumes that both sides will return non-null values.
The code below can be used to reproduce this issue. Note that lowering the provided tolerance value will "solve" the issue, but this is an edge case that should still be addressed.
RegionFactory<Sphere2D> regionFactory = new RegionFactory<>();
S2Point[] s2pA = new S2Point[]{
new S2Point(new Vector3D(0.2122954606, -0.629606302, 0.7473463333)),
new S2Point(new Vector3D(0.2120220248, -0.6296445493, 0.747391733)),
new S2Point(new Vector3D(0.2119838016, -0.6298173178, 0.7472569934)),
new S2Point(new Vector3D(0.2122571927, -0.6297790738, 0.7472116182))};
S2Point[] s2pB = new S2Point[]{
new S2Point(new Vector3D(0.2120291561, -0.629952069, 0.7471305292)),
new S2Point(new Vector3D(0.2123026002, -0.6299138005, 0.7470851423)),
new S2Point(new Vector3D(0.2123408927, -0.6297410403, 0.7472198923)),
new S2Point(new Vector3D(0.2120674039, -0.6297793122, 0.7472653037))};
final SphericalPolygonsSet spsA = new SphericalPolygonsSet(0.0001, s2pA);
final SphericalPolygonsSet spsB = new SphericalPolygonsSet(0.0001, s2pB);
SphericalPolygonsSet invalidSPS = (SphericalPolygonsSet) regionFactory.union(spsA, spsB);
//Causes a NullPointerException
System.out.println(invalidSPS.getSize());
Use case: hyperplaneThickness to control the performance/fidelity tradeoff.
Below is a failing test case adapted from the ZigZag test case. The ZigZag region is enlarged because this issue does not occur for very small regions. The resulting region is ~30 degrees across. The ZigZag boundary is then sub-sampled to a tenth of the tolerance. The sub-sampled data is what would be read in from an external file in the above use case. The sub-sampled data is used to create a new SphericalPolygonsSet, which is tested. On my machine an NPE is generated by the call to getEnclosingCap().
I've also included a plot of the region. Blue dots are the sub-sampled points; black lines and dots are the computed boundary. It matches closely in some locations; others are very far off.
@maisonobe any help or pointers would be appreciated. I'm still trying to get my head around the BSPTree and partitioning code.
@Test
public void testZigZagBoundary() {
final double tol = 1.0e-4;
// sample region, non-convex, not too big, not too small
final S2Point[] vertices = {
new S2Point(-0.12630940610562444e1, (0.8998192093789258 - 0.89) * 100),
new S2Point(-0.12731320182988207e1, (0.8963735568774486 - 0.89) * 100),
new S2Point(-0.1351107624622557e1, (0.8978258663483273 - 0.89) * 100),
new S2Point(-0.13545331405131725e1, (0.8966781238246179 - 0.89) * 100),
new S2Point(-0.14324883017454967e1, (0.8981309629283796 - 0.89) * 100),
new S2Point(-0.14359875625524995e1, (0.896983965573036 - 0.89) * 100),
new S2Point(-0.14749650541159384e1, (0.8977109994666864 - 0.89) * 100),
new S2Point(-0.14785037758231825e1, (0.8965644005442432 - 0.89) * 100),
new S2Point(-0.15369807257448784e1, (0.8976550608135502 - 0.89) * 100),
new S2Point(-0.1526225554339386e1, (0.9010934265410458 - 0.89) * 100),
new S2Point(-0.14679028466684121e1, (0.9000043396997698 - 0.89) * 100),
new S2Point(-0.14643807494172612e1, (0.9011511073761742 - 0.89) * 100),
new S2Point(-0.1386609051963748e1, (0.8996991539048602 - 0.89) * 100),
new S2Point(-0.13831601655974668e1, (0.9008466623902937 - 0.89) * 100),
new S2Point(-0.1305365419828323e1, (0.8993961857946309 - 0.89) * 100),
new S2Point(-0.1301989630405964e1, (0.9005444294061787 - 0.89) * 100)};
SphericalPolygonsSet zone = new SphericalPolygonsSet(tol, vertices);
// sample high resolution boundary
List<S2Point> points = new ArrayList<>();
final Vertex start = zone.getBoundaryLoops().get(0);
Vertex v = start;
double step = tol / 10;
do {
Edge outgoing = v.getOutgoing();
final double length = outgoing.getLength();
int n = (int) (length / step);
for (int i = 0; i < n; i++) {
points.add(new S2Point(outgoing.getPointAt(i*step)));
}
v = outgoing.getEnd();
} while (v != start);
// create zone from high resolution boundary
zone = new SphericalPolygonsSet(tol, points.toArray(new S2Point[0]));
//print(zone);
EnclosingBall<Sphere2D, S2Point> cap = zone.getEnclosingCap();
// check cap size is reasonable. The region is ~0.5 across, could be < 0.25
Assert.assertTrue(cap.getRadius() < 0.5);
for (S2Point vertex : vertices) {
// check original points are on the boundary
Assert.assertEquals("" + vertex, Location.BOUNDARY, zone.checkPoint(vertex));
// check original points are within the cap
Assert.assertTrue("" + vertex, cap.contains(vertex));
}
}
There is already a bicubic interpolation implementation of BivariateGridInterpolator. A simpler bilinear implementation would be nice.
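A minimal sketch of the underlying math on a single grid cell (not a BivariateGridInterpolator implementation, just the bilinear formula):
/** Bilinear interpolation on [x0, x1] x [y0, y1] with corner values f00, f10, f01, f11 (sketch). */
public static double bilinear(double x0, double x1, double y0, double y1,
                              double f00, double f10, double f01, double f11,
                              double x, double y) {
    final double tx = (x - x0) / (x1 - x0); // normalized coordinates in [0, 1]
    final double ty = (y - y0) / (y1 - y0);
    // interpolate along x on both edges, then along y
    final double fy0 = f00 + tx * (f10 - f00);
    final double fy1 = f01 + tx * (f11 - f01);
    return fy0 + ty * (fy1 - fy0);
}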
The first problem (incorrect U statistic) was reported as MATH-1453. What is returned by MannWhitneyU is actually the Wilcoxon Signed Rank statistic (the maximum of U+ and U-); what is used in the test is the correct statistic. The p-values returned by the test suffer from three accuracy-related problems:
The PolynomialFunction should have integrate and antiDerivative methods.
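The coefficient manipulation involved is straightforward; the sketch below uses standalone helpers for illustration (the actual methods would live on PolynomialFunction):
import org.hipparchus.analysis.polynomials.PolynomialFunction;
/** Antiderivative with zero constant term: the integral of sum c_i x^i is sum c_i/(i+1) x^(i+1) (sketch). */
public static PolynomialFunction antiDerivative(final PolynomialFunction p) {
    final double[] c = p.getCoefficients();
    final double[] a = new double[c.length + 1];
    for (int i = 0; i < c.length; i++) {
        a[i + 1] = c[i] / (i + 1);
    }
    return new PolynomialFunction(a);
}
/** Definite integral over [lower, upper] via the antiderivative (sketch). */
public static double integrate(final PolynomialFunction p, final double lower, final double upper) {
    final PolynomialFunction primitive = antiDerivative(p);
    return primitive.value(upper) - primitive.value(lower);
}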
There is currently no way to recover the probability mass function in EnumeratedRealDistribution or EnumeratedIntegerDistribution. These classes should expose a getPmf method like EnumeratedDistribution.
The validateSampleData method in AbstractLinearRegression requires that the number of rows in the design matrix is at least one greater than the number of regressors. If the model does not include an intercept term, this check is too stringent: nobs == number of regressors should be allowed in this case.
This issue was surfaced by the StackOverflow question OLS Multiple Linear Regression with commons-math
The RANDOM algorithm makes a nice complement to PSquarePercentile for streaming percentiles. While it does not have a uniformly fixed bound on storage, storage is bounded for fixed quantile estimation error and grows very slowly with increases in precision. It also allows any quantile to be estimated based on the data it stores, so e.g. getResult(quantile) or even getResult(quantile[]) methods are possible. Finally, aggregation is straightforward.
When the result is on a bin boundary and the default Gaussian smoothing kernel is specified, EmpiricalDistribution#inverseCumulativeProbability returns Double.POSITIVE_INFINITY.
This was initially reported against Apache Commons Math 3.6.1 as MATH-1462.
This was reported as https://issues.apache.org/jira/browse/MATH-1431 against Commons Math. The problem, as correctly diagnosed by the reporter, is that when a bin is empty, the getKernel method returns a Gaussian distribution with NaN parameters.
Hello all,
I am using the EigenDecomposition class these days and sometimes the diagonalized matrix I get in return seems wrong.
A typical case is when I set this matrix:
{{23473.684554963584, 4273.093076392109},
{4273.093076392048, 4462.13956661408}}
I know the matrix is not perfectly symmetric (yet close to double precision), but I got this matrix from an A times transpose(A) computation, so I cannot get a more symmetric matrix.
The algorithm then computes a complex eigenvalues matrix:
{{13967.9120607888, 10422.0456317615},
{-10422.0456317615, 13967.9120607888}}
I have checked with Matlab and with a home-made 2x2 matrix diagonalizer, and with both I get the same real eigenvalues:
24389.95769255035 and 3545.86642902732
which have nothing to do with the complex ones (in contradiction with the theoretical uniqueness of the eigenvalues).
I don't know if the symmetry test is just too strict or if there is something else wrong or that I didn't understand, but I find it suspicious ;)
Thank you in advance for your clues,
All the best,
Quentin
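For what it's worth, a self-contained snippet along these lines should reproduce the observation (a reproduction sketch, not a guaranteed failure, since the outcome depends on how the symmetry check classifies the matrix):
import java.util.Arrays;
import org.hipparchus.linear.EigenDecomposition;
import org.hipparchus.linear.MatrixUtils;
import org.hipparchus.linear.RealMatrix;
RealMatrix m = MatrixUtils.createRealMatrix(new double[][] {
    { 23473.684554963584, 4273.093076392109 },
    {  4273.093076392048, 4462.13956661408  }
});
EigenDecomposition ed = new EigenDecomposition(m);
System.out.println(Arrays.toString(ed.getRealEigenvalues()));
System.out.println(Arrays.toString(ed.getImagEigenvalues()));
// a symmetric treatment would give roughly 24389.96 and 3545.87 with zero imaginary parts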
When FieldODEIntegrator implementations create a FieldODEStateAndDerivative, they put all the components of the integrated state into the primary state and leave the secondary state as null, instead of mapping components according to the primary and secondary equations' respective dimensions.
Hi the Hipparchus Team :)
A performance improvement to Array2DRowRealMatrix.getSubMatrix() has been applied recently to Commons Math; you may be interested in applying it as well:
https://issues.apache.org/jira/browse/MATH-1389
apache/commons-math@72df12f
With this modification the performance is significantly better when the method hasn't been compiled by the JIT yet (or with non-HotSpot JVMs). Once the JIT has kicked in, it is roughly equivalent to the current implementation.
One of the constructors of the Illinois solver in fact creates a Pegasus solver.
When running mvn clean install on Java 11, it fails when trying to run the tests because @{jacoco.agent.args} is not replaced by anything. Applying the following patch seems to fix the build, but I don't know how it will affect other versions of Java.
diff --git a/hipparchus-parent/pom.xml b/hipparchus-parent/pom.xml
index 0c6fec34e..cbcd23a5c 100644
--- a/hipparchus-parent/pom.xml
+++ b/hipparchus-parent/pom.xml
@@ -600,7 +600,7 @@
<excludes>
<exclude>**/*AbstractTest.java</exclude>
</excludes>
- <argLine>@{jacoco.agent.args} -Xmx1200m</argLine>
+ <argLine>-Xmx1200m</argLine>
</configuration>
</plugin>
<plugin>
In RungeKuttaFieldStateInterpolator.previousStateLinearCombination(...) and currentStateLinearCombination(...) the wrong state is used when the interpolator is restricted, i.e. the soft state is used instead of the global state. It seems that the tests that would have caught this error were not copied from ODEStateInterpolatorAbstractTest.
Fixing and copying tests.
The following test triggers a NullPointerException:
@Test
public void testInfiniteQuadrant() {
final double tolerance = 1.0e-10;
BSPTree<Euclidean2D> bsp = new BSPTree<>();
bsp.insertCut(new Line(Vector2D.ZERO, 0.0, tolerance));
bsp.getPlus().setAttribute(Boolean.FALSE);
bsp.getMinus().insertCut(new Line(Vector2D.ZERO, 0.5 * FastMath.PI, tolerance));
bsp.getMinus().getPlus().setAttribute(Boolean.FALSE);
bsp.getMinus().getMinus().setAttribute(Boolean.TRUE);
PolygonsSet polygons = new PolygonsSet(bsp, tolerance);
Assert.assertEquals(Double.POSITIVE_INFINITY, polygons.getSize(), 1.0e-10);
}
Hi there,
I am trying to use FieldOrdinaryDifferentialEquation with the Complex type, but it seems this is not possible, as the Complex type doesn't implement RealFieldElement.
I do not understand why there is such a limitation of the FODE to only accept RealFieldElement and not the more generic FieldElement type. Could you tell me why?
I saw on Hipparchus that there is an interface, RealFieldUnivariateFunction, that allows implementing the method value(T x) for a type T.
Is it possible to also create an interface to implement the value method for an array of T, as is already done for the double type with the interface UnivariateVectorFunction?
Thanks, Bryan
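A sketch of what such an interface could look like, mirroring UnivariateVectorFunction (the name RealFieldUnivariateVectorFunction is an assumption):
import org.hipparchus.RealFieldElement;
/** Hypothetical field counterpart of UnivariateVectorFunction (sketch only). */
public interface RealFieldUnivariateVectorFunction<T extends RealFieldElement<T>> {
    /** Compute the vector value of the function at x. */
    T[] value(T x);
}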