java-lang's Introduction

Chronicle Overview

Chronicle Software provides libraries to help with fast data. The majority of our customers are in financial services. Our products include:

Chronicle FIX/ITCH Engine - Low latency FIX/ITCH engine in Java for all versions of FIX. Can parse and generate messages within 1 microsecond.

Chronicle Microservices Framework - Microservices built with Chronicle Services are efficient and easy to build, test, and maintain. Equally importantly, they provide exceptionally high throughput, low latency, and transparent HA/DR.

Chronicle Matching Engine - forms the backbone for a resilient and scalable exchange solution. It provides order matching, validation, and risk checks with high capacity and low latency. It has a modular and flexible design which enables it to be used stand-alone, or seamlessly integrated with Chronicle FIX and Chronicle Services.

Chronicle EFX - built on Chronicle Microservices, EFX contains components for Aggregation, Pricing, Hedging, Position Keeping, P&L, Market Gateway and Algo containers. EFX allows the customer to use off the shelf functionality built and maintained by Chronicle, or to extend and customise with their own algos and IP - the best compromise of "buy vs build".

Chronicle Queue and also Chronicle Queue Enterprise - using Chronicle Queue for low latency message passing provides an effectively unlimited buffer between producers and consumers and a complete audit trail of every message sent. Queue Enterprise provides even lower latencies and additional delivery semantics - for example, only process a message once it is guaranteed to have been replicated to one or more other hosts.

Chronicle Map is a key-value store sharing persisted memory between processes, either on the same server or across networks. CM is designed to store data off-heap, which minimises heap usage and garbage collection, allowing data to be accessed with sub-microsecond latency. CM is a structured key-value store able to support exceptionally high update rates and high-throughput data, e.g. OPRA market data, with minimal configuration. Replication is provided by Chronicle Map Enterprise.

Contributor agreement

For us to accept contributions to our open source libraries, we require contributors to sign the agreement below.

Documentation in this repo

This repo contains the following docs

  • Java Version Support documents which versions of Java/JVM are supported by Chronicle libraries

  • Platform Support documents which Operating Systems are supported by Chronicle libraries

  • Version Support explains Chronicle’s version numbers and release timetable

  • Anatomy shows a graphical representation of the OpenHFT projects and their dependencies

  • Reducing Garbage contains tips and tricks to reduce garbage


java-lang's Issues

ByteBufferBytes must set bb's order to native in constructor

Because compare-and-swap operations are performed as native-order operations.

Breaking test:

    @Test
    public void testCAS() {
        Bytes bytes = new ByteBufferBytes(ByteBuffer.allocate(100));
        bytes.compareAndSwapLong(0, 0L, 1L);
        assertEquals(1L, bytes.readLong(0));
    }
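The mismatch behind the failing test can be reproduced without Java-Lang at all: a heap ByteBuffer defaults to big-endian regardless of the platform, while Unsafe-backed compare-and-swap writes in the platform's native order. A minimal, library-independent sketch:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class OrderMismatchDemo {
    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocate(8);
        // A fresh ByteBuffer is big-endian regardless of the platform.
        System.out.println(bb.order());              // BIG_ENDIAN

        // Write 1L in native order, as a CAS effectively does...
        bb.order(ByteOrder.nativeOrder());
        bb.putLong(0, 1L);

        // ...then read it back through the default big-endian view,
        // as an un-adjusted ByteBufferBytes would.
        bb.order(ByteOrder.BIG_ENDIAN);
        long seen = bb.getLong(0);
        // On little-endian hardware 'seen' is 0x0100000000000000L, not 1L.
        System.out.println(Long.toHexString(seen));
    }
}
```

Setting the buffer's order to ByteOrder.nativeOrder() in the constructor, as the issue title suggests, removes the discrepancy between the two views.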

Add generated method to get data off heap without memory allocation

Add a new method template to the code generator that populates and returns the CharSequence supplied as the first parameter. Up to maxLength chars would be copied to the first parameter.
Example:

public interface AccountVO {
    CharSequence getUsingAccount(CharSequence account, int maxLength);
}

This would avoid the memory allocation associated with using the current generated getXXX method.
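A rough sketch of what such a generated method could look like. AccountHolder, its backing char[], and setAccount are hypothetical stand-ins for the generated code, and the parameter is narrowed to StringBuilder here because a plain CharSequence cannot be written into:

```java
// Hypothetical sketch of the generated method: copies up to maxLength
// chars of the stored account into a caller-supplied, reusable buffer
// instead of allocating a new String on every call.
class AccountHolder {
    private final char[] store = new char[64]; // assumed backing storage
    private int storeLen;

    void setAccount(CharSequence account) {
        storeLen = Math.min(account.length(), store.length);
        for (int i = 0; i < storeLen; i++)
            store[i] = account.charAt(i);
    }

    // Populates and returns the supplied buffer, as the issue proposes.
    CharSequence getUsingAccount(StringBuilder account, int maxLength) {
        account.setLength(0);
        account.append(store, 0, Math.min(storeLen, maxLength));
        return account;
    }
}
```

The caller can hold one StringBuilder per thread and reuse it across calls, keeping the hot path allocation-free.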

Error with Toggle Example

Hi,
I tried to run LockingViaMMapWithThreadIdMain with one process invoking it with "false" and the other with "true". It complains,

Exception in thread "main" java.lang.IllegalStateException: Reentered 255 times without an unlock - if you are using this to lock across processes, there could be a thread id conflict letting one process 'steal' the lock from another process. To avoid this, call AffinitySupport.setThreadId() during startup which will make all threads have unique ids
        at net.openhft.lang.io.AbstractBytes.tryLockNanos4a(AbstractBytes.java:2344)
        at net.openhft.lang.io.AbstractBytes.tryLockNanosInt(AbstractBytes.java:2317)
        at net.openhft.lang.io.AbstractBytes.busyLockInt(AbstractBytes.java:2359)
        at net.openhft.lang.io.LockingViaMMapWithThreadIdMain.main(LockingViaMMapWithThreadIdMain.java:73)

Then, I tried the suggested solution of adding AffinitySupport.setThreadId() to the beginning of the code, but then it fails to acquire the lock,

Exception in thread "main" java.lang.IllegalStateException: Failed to acquire lock after 20.0 seconds.

Could you please help on this?

Thank you,

Support for JDK version > 8

Hello. I remember seeing at some point in time that you offered support for JDK versions beyond 8 (not for free). Is that still true? If so, with the new 6-month release cycle of the JDK, will this be feasible? I'm not sure what kind of changes are necessary to make it work for versions greater than 8...

Thank you

Error in AbstractBytes.writeUTFΔ(long offset, int maxSize, @Nullable CharSequence s) when remaining() == 0

(lang-6.4.4.jar)

When I call writeUTFΔ with offset, I get the following exception:

java.lang.IllegalArgumentException: encoded string too long: 24 bytes, remaining=0
at net.openhft.lang.io.AbstractBytes.findUTFLength(AbstractBytes.java:838)
at net.openhft.lang.io.AbstractBytes.writeUTFΔ(AbstractBytes.java:803)

The exception happens when position() of the buffer is set to its size so remaining() returns 0.

I believe remaining() should not be relevant when an offset is passed to writeUTFΔ; only the sum of the offset and the encoded string length matters.
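The reporter's point can be stated as a standalone check: with an explicit offset, the bound should be measured from offset against capacity(), not taken from remaining(), which depends on the current position. OffsetRoomCheck and checkRoom are illustrative names, not Java-Lang API:

```java
import java.nio.ByteBuffer;

// With an explicit offset, the capacity test should be against
// capacity() - offset, not remaining(). Illustrative only.
final class OffsetRoomCheck {
    static void checkRoom(ByteBuffer buf, int offset, int encodedLen) {
        if (encodedLen > buf.capacity() - offset)
            throw new IllegalArgumentException("encoded string too long: "
                + encodedLen + " bytes, room=" + (buf.capacity() - offset));
    }
}
```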

Error in Stream implementation

AbstractBytes$BytesInputStream violates the contract of InputStream.
Namely, if I try to read into a byte array which is too long, it throws an IllegalArgumentException.
A correct implementation would read the available bytes and return the number of bytes read.
If no byte is available, it should return -1.
The reason is that AbstractBytes$BytesInputStream falls back to NativeBytes.read(..):

    @Override
    public int read(@NotNull byte[] bytes, int off, int len) {
        if (len < 0 || off < 0 || off + len > bytes.length) // <== WRONG for streams
            throw new IllegalArgumentException();
        long left = remaining(); // <== should copy + return this
        if (left <= 0) return -1;
        int len2 = (int) Math.min(len, left);
        UNSAFE.copyMemory(null, positionAddr, bytes, BYTES_OFFSET + off, len2);
        positionAddr += len2;
        return len2;
    }

I can provide a fix. Do you agree on changing behaviour or will this have side effects ? Maybe I just fix it in the AbstractBytes$BytesInputStream read method to reduce risk.
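For comparison, a self-contained wrapper that does honour the InputStream contract might look like this. ContractReadStream is an illustrative stand-in, not a proposed Java-Lang class:

```java
import java.io.InputStream;

// Illustrative wrapper honouring the InputStream contract: clamp to the
// bytes available, return -1 only at end of stream, and throw
// IndexOutOfBoundsException (not IllegalArgumentException) on bad ranges.
class ContractReadStream extends InputStream {
    private final byte[] src; // stands in for the underlying Bytes
    private int pos;

    ContractReadStream(byte[] src) { this.src = src; }

    @Override public int read() {
        return pos < src.length ? src[pos++] & 0xFF : -1;
    }

    @Override public int read(byte[] b, int off, int len) {
        if (off < 0 || len < 0 || off + len > b.length)
            throw new IndexOutOfBoundsException();
        if (len == 0) return 0;            // per contract
        int left = src.length - pos;
        if (left <= 0) return -1;          // end of stream
        int n = Math.min(len, left);       // read what is available
        System.arraycopy(src, pos, b, off, n);
        pos += n;
        return n;                          // may be shorter than len
    }
}
```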

writeUTFΔ(offset, maxSize, s) writes at current position when s == null

As can be seen in the code below, AbstractBytes.writeUTFΔ calls position(offset) when s != null, but does not call it when s == null. This results in stepping on data at the current position().

    @Override
    public void writeUTFΔ(long offset, int maxSize, @Nullable CharSequence s) throws IllegalStateException {
        assert maxSize > 1;
        if (s == null) {
            writeStopBit(-1); // <== written at the current position, not at offset
            return;
        }
        long strlen = s.length();
        long utflen = findUTFLength(s, strlen);
        long totalSize = IOTools.stopBitLength(utflen) + utflen;
        if (totalSize > maxSize)
            throw new IllegalStateException("Attempted to write " + totalSize + " byte String, when only " + maxSize + " allowed");
        long position = position();
        try {
            position(offset);
            writeStopBit(utflen);
            writeUTF0(s, strlen);
        } finally {
            position(position);
        }
    }
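The fix implied by the report is to seek to offset before writing the null marker and then restore the position, exactly as the non-null branch does. A self-contained analogue against a plain ByteBuffer (writeMarkerAt and the marker byte are stand-ins for writeStopBit(-1), not Java-Lang API):

```java
import java.nio.ByteBuffer;

// Self-contained analogue of the suggested fix: write the null marker
// at 'offset' and restore the position afterwards, mirroring the
// try/finally in the non-null branch. Names are illustrative only.
class OffsetWriteDemo {
    static void writeMarkerAt(ByteBuffer buf, int offset) {
        int saved = buf.position();
        try {
            buf.position(offset);
            buf.put((byte) 0x80); // stand-in for writeStopBit(-1)
        } finally {
            buf.position(saved);  // caller's position is left untouched
        }
    }
}
```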

Backward Compatibility with Java 1.6

Dear Team

It looks like two of the classes -

net.openhft.lang.io.serialization.direct.Introspect

Line Number 34

return Long.compare(offset(first), offset(second));

and

net.openhft.lang.model.DataValueGenerator

Line Number 48:

int cmp = -Integer.compare(o1.getValue().heapSize(), o2.getValue().heapSize());

use compare methods that were only added in Java 1.7, and hence the code does not compile under Java 1.6. It works fine in Java 1.7.

If possible, please use alternatives to make the code backward compatible.
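For reference, since Long.compare and Integer.compare only arrived in Java 1.7, equivalent comparisons can be written by hand; a minimal sketch:

```java
// Hand-written replacements that compile under Java 1.6; Long.compare
// and Integer.compare were only introduced in Java 1.7.
final class Compat {
    static int compareLong(long x, long y) {
        return x < y ? -1 : (x == y ? 0 : 1);
    }

    static int compareInt(int x, int y) {
        return x < y ? -1 : (x == y ? 0 : 1);
    }
}
```

Note that subtraction (x - y) is not a safe substitute for the int case, as it can overflow for widely separated values.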

Thanks

DirectSerializationMetadataTest fails on 32bit systems

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest
Tests run: 6, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 0.116 sec <<< FAILURE! - in net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest
primitives2Metadata(net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest)  Time elapsed: 0.058 sec  <<< FAILURE!
java.lang.AssertionError: expected:<12> but was:<8>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest.primitives2Metadata(DirectSerializationMetadataTest.java:28)

primitives4Metadata(net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest)  Time elapsed: 0 sec  <<< FAILURE!
java.lang.AssertionError: expected:<4> but was:<8>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest.primitives4Metadata(DirectSerializationMetadataTest.java:46)

primitives6Metadata(net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest)  Time elapsed: 0.002 sec  <<< FAILURE!
java.lang.AssertionError: expected:<28> but was:<24>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest.primitives6Metadata(DirectSerializationMetadataTest.java:64)

primitives1Metadata(net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest)  Time elapsed: 0.001 sec  <<< FAILURE!
java.lang.AssertionError: expected:<4> but was:<8>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest.primitives1Metadata(DirectSerializationMetadataTest.java:19)

primitives3Metadata(net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest)  Time elapsed: 0.001 sec  <<< FAILURE!
java.lang.AssertionError: expected:<12> but was:<16>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest.primitives3Metadata(DirectSerializationMetadataTest.java:37)

primitives5Metadata(net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest)  Time elapsed: 0.002 sec  <<< FAILURE!
java.lang.AssertionError: expected:<12> but was:<16>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at net.openhft.lang.io.serialization.direct.DirectSerializationMetadataTest.primitives5Metadata(DirectSerializationMetadataTest.java:55)


Results :

Failed tests:
  DirectSerializationMetadataTest.primitives2Metadata:28 expected:<12> but was:<8>
  DirectSerializationMetadataTest.primitives4Metadata:46 expected:<4> but was:<8>
  DirectSerializationMetadataTest.primitives6Metadata:64 expected:<28> but was:<24>
  DirectSerializationMetadataTest.primitives1Metadata:19 expected:<4> but was:<8>
  DirectSerializationMetadataTest.primitives3Metadata:37 expected:<12> but was:<16>
  DirectSerializationMetadataTest.primitives5Metadata:55 expected:<12> but was:<16>

Tests run: 6, Failures: 6, Errors: 0, Skipped: 0

README hard to read due to error in markdown syntax

The readme ( https://github.com/OpenHFT/Java-Lang/blob/master/README.md ) currently has this:

####Example
ByteBuffer byteBuffer = ByteBuffer.allocate(SIZE);
ByteBufferBytes bytes = new ByteBufferBytes(byteBuffer);
for (long i = 0; i < bytes.maximumLimit(); i++)
bytes.writeLong(i);
for (long i = bytes.maximumLimit()-8; i >= 0; i -= 8) {
int j = bytes.readLong(i);
assert i == j;
}

I think the intention is to have this:

Example

ByteBuffer byteBuffer = ByteBuffer.allocate(SIZE);
ByteBufferBytes bytes = new ByteBufferBytes(byteBuffer);
for (long i = 0; i < bytes.maximumLimit(); i++)
    bytes.writeLong(i);
for (long i = bytes.maximumLimit()-8; i >= 0; i -= 8) {
    long j = bytes.readLong(i);
    assert i == j;
}

The added space after #### (before the word "Example") makes a big difference here. (Well, actually, here at least the line breaks come out okay; in the actual readme even those disappear for some reason.)

Source files without license headers

Hi

The following source files are without license headers:

./lang/src/main/java/net/openhft/lang/io/VanillaBytesHash.java

./lang/src/test/java/net/openhft/lang/io/BytesTest.java
./lang/src/test/java/net/openhft/lang/values/BuySell.java
./lang/src/test/java/net/openhft/lang/values/BuySellValues.java
./lang/src/test/java/net/openhft/lang/values/EnumValuesTest.java
./lang/src/test/java/net/openhft/lang/values/StringValueTest.java

Please, confirm the licensing of code and/or content/s, and add license headers.

https://fedoraproject.org/wiki/Packaging:LicensingGuidelines?rd=Packaging/LicensingGuidelines#License_Clarification

Thanks in advance
Regards

Java 9 - tools.jar dependency

My project uses io.projectreactor.addons - reactor-logback. This in turn depends on the OpenHFT Chronicle project, which in turn depends on OpenHFT lang.jar, which in turn depends on tools.jar, which has been removed in Java 9.
Any advice on how to get around this dependency issue, or will there be a release resolving it?

A question about the commit 43da9545f

Hi,

I am doing some research on the evolution of your projects Java-Lang and Chronicle-Queue. I find that the committed date and the authored date are exactly the same between this commit 43da954(43da954) and the commit 8d7dea7ef(OpenHFT/Chronicle-Queue@8d7dea7) of Chronicle-Queue. As far as I know, when using Git cherry-pick we can preserve the original authored date, but the committed date will differ. Would you please tell me how you applied this commit to the other repository?
Thank you!

Best,
Tao Ji

New write() Methods in Bytes

Please consider adding positional buffer write methods to NativeBytes. Currently the only available method that comes close is:

    void write(long offset, byte[] bytes, int off, int len);

However, I am writing lock-free many-to-one and one-to-many ring buffers. Not having positional variants for buffers makes this very difficult, because I can't use/move the position of any of the buffers.

If the following write methods

    void write(long offset, ByteBuffer bytes, int off, int len);
    void write(long offset, Bytes bytes, int off, int len);

were added it would make this use case possible.

I have the same problem for reading; I am using buffers, but only byte[] is supported via:

    void readFully(long offset, @NotNull byte[] bytes, int off, int len);
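One possible shape for the proposed methods, assuming they can be built on the existing absolute single-byte writers so that neither buffer's position is touched (the property the ring buffers need). writeByte here is a stand-in, not the actual Java-Lang signature:

```java
import java.nio.ByteBuffer;

// Sketch of the proposed positional write: built on an absolute
// single-byte primitive so neither buffer's position moves.
// 'writeByte' is a stand-in for the existing absolute writers.
abstract class PositionalWrites {
    abstract void writeByte(long offset, byte b);

    public void write(long offset, ByteBuffer bytes, int off, int len) {
        for (int i = 0; i < len; i++)
            writeByte(offset + i, bytes.get(off + i)); // absolute get(): position unchanged
    }
}
```

A real implementation would presumably use a bulk copy rather than a per-byte loop, but the key point is that only absolute indexing is used on both sides.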

Getting Error after converting to Eclipse project

Hi all,

I downloaded the project and converted it to an Eclipse project via

    mvn eclipse:eclipse

but there are some strange characters at the end of some method names, like

ByteBufferBytesTest line 208
public void testWriteReadUTFΔ() {

Is there a problem with the source code, or am I missing something?

regards

Provide HugeArray.reindex(long index, T element)

This is probably more of a question but I did not find a related forum/mailing-list where to ask.

I'm wondering if it would be possible (and even wise) to provide a method HugeArray.reindex() such that an existing record can be re-pointed to a different array index,

e.g

HugeArray<DataType> array = HugeCollections.newArray(DataType.class, size);
DataType dt = array.get(1);
array.reindex(2, dt); // now dt points to the record at index 2

The implementation in HugeArrayImpl could be:

@Override
public void reindex(long index, T element) {
    DirectBytes bytes = (DirectBytes) ((Byteable) element).bytes();
    bytes.positionAndSize(index * size, size);
}

This would avoid the extra acquire(), copyFrom() and recycle() calls present in HugeArray.get():

@Override
public void get(long index, T element) {
    T t = acquire();
    DirectBytes bytes = (DirectBytes) ((Byteable) t).bytes();
    bytes.positionAndSize(index * size, size);
    ((Copyable) element).copyFrom(t);
    recycle(t);
}

I'm probably missing something -- maybe there's a reason, such as memory locking or thread-safety semantics, that prevents this sort of pattern.

Again, apologies for asking a question through github issues.
