
protobufjs / bytebuffer.js


A fast and complete ByteBuffer implementation using either ArrayBuffers in the browser or Buffers under node.js.

Home Page: http://dcode.io

License: Apache License 2.0

JavaScript 100.00%

bytebuffer.js's People

Contributors

adambom, adon-at-work, amilajack, bridgear, carnewal, cemerick, dcodeio, dretch, kidskilla, mhseiden, moxaj


bytebuffer.js's Issues

writeInt64 and similar methods should accept strings

I think it's common to store 64-bit integers as decimal strings. ProtoBuf.js already allows one to pass strings to e.g. uint64 fields, so maybe ByteBuffer.js should too.

This would save us from having to write code like:

payload.writeUint64(ByteBuffer.Long.fromString(this.steamID));
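A hedged sketch of what such an overload might look like, with the string branch as the addition (writeUint64String is a hypothetical name; Long.fromString is part of long.js, which ByteBuffer already exposes as ByteBuffer.Long):

// Hypothetical convenience wrapper, not part of the current API.
ByteBuffer.prototype.writeUint64String = function(value, offset) {
    if (typeof value === 'string')
        value = ByteBuffer.Long.fromString(value, true); // parse decimal string as unsigned
    return this.writeUint64(value, offset);
};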

require returns dcodeIO in a standalone coffeeify deployment

After deploying a CoffeeScript app as follows:

browserify --transform coffeeify -s myapp src/index.coffee --debug > myapp-debug.js

The require under node.js (the first case) works normally, but the require used by the browser behaves differently. This is my workaround to include bytebuffer in both cases:

ByteBuffer = require('bytebuffer')
ByteBuffer = ByteBuffer.dcodeIO.ByteBuffer if ByteBuffer.dcodeIO
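A plain-JS equivalent of the same workaround (sketch):

var ByteBuffer = require('bytebuffer');
if (ByteBuffer.dcodeIO) // browser bundle exported the dcodeIO namespace instead
    ByteBuffer = ByteBuffer.dcodeIO.ByteBuffer;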

Slight Performance Gain for ByteBuffer.writeVarint64

This is similar to the ByteBuffer.writeVarint32 suggestion mentioned previously. After using a JavaScript profiler, I noticed that calling ByteBuffer.writeVarint64 produced not only writes to but also reads from the underlying buffer. Curious, I looked at the source code and noticed that the algorithm writes all the bytes to the buffer with the MSB set and then reads back the final byte to clear its MSB and write it back to the buffer. This seemed inefficient, so I altered the algorithm to avoid the additional read and write and gained a slight performance increase.

Existing Function:

ByteBuffer.prototype.writeVarint64 = function(value, offset) {
    var advance = typeof offset === 'undefined';
    offset = typeof offset !== 'undefined' ? offset : this.offset;
    if (!(typeof value === 'object' && value instanceof Long)) value = Long.fromNumber(value, false);

    var part0 = value.toInt() >>> 0,
        part1 = value.shiftRightUnsigned(28).toInt() >>> 0,
        part2 = value.shiftRightUnsigned(56).toInt() >>> 0,
        size = ByteBuffer.calculateVarint64(value);

    this.ensureCapacity(offset+size);
    var dst = this.view;
    switch (size) {
        case 10: dst.setUint8(offset+9, (part2 >>>  7) | 0x80);
        case 9 : dst.setUint8(offset+8, (part2       ) | 0x80);
        case 8 : dst.setUint8(offset+7, (part1 >>> 21) | 0x80);
        case 7 : dst.setUint8(offset+6, (part1 >>> 14) | 0x80);
        case 6 : dst.setUint8(offset+5, (part1 >>>  7) | 0x80);
        case 5 : dst.setUint8(offset+4, (part1       ) | 0x80);
        case 4 : dst.setUint8(offset+3, (part0 >>> 21) | 0x80);
        case 3 : dst.setUint8(offset+2, (part0 >>> 14) | 0x80);
        case 2 : dst.setUint8(offset+1, (part0 >>>  7) | 0x80);
        case 1 : dst.setUint8(offset+0, (part0       ) | 0x80);
    }
    dst.setUint8(offset+size-1, dst.getUint8(offset+size-1) & 0x7F);
    if (advance) {
        this.offset += size;
        return this;
    } else {
        return size;
    }
};

Revised Function:

ByteBuffer.prototype.writeVarint64 = function(value, offset) {
    var advance = typeof offset === 'undefined';
    offset = typeof offset !== 'undefined' ? offset : this.offset;
    if (!(typeof value === 'object' && value instanceof Long)) value = Long.fromNumber(value, false);

    var part0 = value.toInt() >>> 0,
        part1 = value.shiftRightUnsigned(28).toInt() >>> 0,
        part2 = value.shiftRightUnsigned(56).toInt() >>> 0,
        size = ByteBuffer.calculateVarint64(value);

    this.ensureCapacity(offset+size);
    var dst = this.view;
    switch (size) {
        case 10: dst.setUint8(offset+9, ((part2 >>>  7) & 0x7F));
        case 9 : dst.setUint8(offset+8, (size !== 9  ? (part2       ) | 0x80 : (part2       ) & 0x7F));
        case 8 : dst.setUint8(offset+7, (size !== 8  ? (part1 >>> 21) | 0x80 : (part1 >>> 21) & 0x7F));
        case 7 : dst.setUint8(offset+6, (size !== 7  ? (part1 >>> 14) | 0x80 : (part1 >>> 14) & 0x7F));
        case 6 : dst.setUint8(offset+5, (size !== 6  ? (part1 >>>  7) | 0x80 : (part1 >>>  7) & 0x7F));
        case 5 : dst.setUint8(offset+4, (size !== 5  ? (part1       ) | 0x80 : (part1       ) & 0x7F));
        case 4 : dst.setUint8(offset+3, (size !== 4  ? (part0 >>> 21) | 0x80 : (part0 >>> 21) & 0x7F));
        case 3 : dst.setUint8(offset+2, (size !== 3  ? (part0 >>> 14) | 0x80 : (part0 >>> 14) & 0x7F));
        case 2 : dst.setUint8(offset+1, (size !== 2  ? (part0 >>>  7) | 0x80 : (part0 >>>  7) & 0x7F));
        case 1 : dst.setUint8(offset+0, (size !== 1  ? (part0       ) | 0x80 : (part0       ) & 0x7F));
    }
    if (advance) {
        this.offset += size;
        return this;
    } else {
        return size;
    }
};

In addition, and similar to the corresponding ByteBuffer.writeVarint32 suggestion, I noticed that the unused bits (when encoding a large value) are set to 1's (for some values) due to the 0x7F mask. I've seen other implementations set/leave these unused bits as 0's. This caused a problem for me when I was comparing the binary data from a C++ backend against the same data encoded by the JavaScript frontend using this library. I have therefore changed my code to use the 0x01 mask instead (the tenth byte of a varint64 carries only one significant bit), ensuring these bits are always consistently 0's. Note that this change is based on the changes made above.

Existing Function (based on changes made above):

ByteBuffer.prototype.writeVarint64 = function(value, offset) {
...
    switch (size) {
        case 10: dst.setUint8(offset+9, ((part2 >>>  7) & 0x7F));
        ...
    }
...
};

Revised Function:

ByteBuffer.prototype.writeVarint64 = function(value, offset) {
...
    switch (size) {
        case 10: dst.setUint8(offset+9, ((part2 >>>  7) & 0x01));
        ...
    }
...
};

How can offset > limit?

The documentation for ByteBuffer#toBuffer states:

Will transparently ByteBuffer#flip this ByteBuffer if offset > limit but the actual offsets remain untouched.

I might be missing something, but I can't imagine a scenario where offset would be greater than limit. I think it would be more useful to flip if limit equals offset (currently it returns an empty Buffer).

reading bytes without offset

I want to read bytes in LE order

// coordinates data (walk)
// 100c 70e3 76ba 73c3
// 100c 92e3 af8a 1266
// 100c b1e3 d34b c2b8
// 100c d0e3 6168 aec3

// set to LE
buf.order(true)

var w = buf.readUint32(0)
var x = buf.readUint32()
var y = buf.readUint32()
var z = buf.readUint32()

The docs says:

Offset to read from. Will use and increase ByteBuffer#offset by 4 if omitted.

So why am I getting the error:

RangeError: Illegal offset: 0 <= 8 (+4) <= 11

Oh, it should be readUint16(), but how do I make it read LE?
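For reference, a minimal sketch of the little-endian read (assuming the standard bytebuffer.js v3 API):

var bb = ByteBuffer.fromHex("100c70e3");
bb.order(true);               // switch to little-endian (ByteBuffer.LITTLE_ENDIAN)
var first  = bb.readUint16(); // 0x0c10 -- bytes 10 0c read LSB first
var second = bb.readUint16(); // 0xe370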

Writing Int32 to ByteBuffer yields strange result in DataView

Hello,

I'm not familiar with ByteBuffer (trying it for the first time due to protobuf.js), but I'm trying to simply write an int32 to the buffer, convert it to an ArrayBuffer, and see this value as 4 bytes in a file output. I'm pretty sure I'm making some nasty mistake and it's not the library's fault, but this is one of the ways I've tried:

var ByteBuffer = dcodeIO.ByteBuffer;
//below is run inside a loop
var byteBuffer = new ByteBuffer(ByteBuffer.DEFAULT_CAPACITY, ByteBuffer.LITTLE_ENDIAN);
byteBuffer.append(new ByteBuffer().writeInt32(data.calculate()));

//finally, generate the file for download
var data = new Blob([new DataView(byteBuffer.toArrayBuffer())], {type: 'application/octet-stream'});
this.url = ($window.URL || $window.webkitURL).createObjectURL(data);

then in html there's a field to download the file, as in:

<a download="vat.scope" ng-href="{{ vatEditor.url }}">download</a>

For example, when data.calculate() yields 32, I'd expect to see 0x 20 00 00 00, but what I get is 0x 28 00 38 05 (which as a little-endian integer is 87556136). Would anyone be kind enough to help me pinpoint the mistake(s) here?

A more extensive explanation of what I'm trying to accomplish can be found here.
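For what it's worth, a plausible cause (an assumption, not a confirmed diagnosis): append() copies the source's offset-to-limit region, and after writeInt32 the inner buffer's offset already sits past the written bytes, so the garbage after them gets copied instead. Flipping the inner buffer first should make append() copy exactly the written bytes:

var inner = new ByteBuffer(4, ByteBuffer.LITTLE_ENDIAN);
inner.writeInt32(data.calculate()); // offset is now 4
inner.flip();                       // offset -> 0, limit -> 4
byteBuffer.append(inner);           // copies exactly the 4 written bytes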

Question about Signed Variable Length Integers

I believe I need to encode signed variable length integers. I'm not very clear on this because I have only one example to work with (a small number). This may be enough. I found the following work-around and was able to get proper encoding:

    n=0x16
    n=(n << 1) ^ (n >> 31)
    b = new ByteBuffer(DEFAULT_CAPACITY=4, ByteBuffer.LITTLE_ENDIAN)
    b.writeVarint32(n)

This code encodes 0x16 to 0x2C. This is consistent with the C++ implementation I'm looking at and with the Signed Integers section here: https://developers.google.com/protocol-buffers/docs/encoding#varints

I am not sure how to decode it. Here is more information about the case that came up, and a jsfiddle that may help: http://stackoverflow.com/questions/28351493/unexpected-result-in-bytebuffer-js-when-writing-a-variable-integer
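Decoding is the inverse transform; a small sketch (zigZagDecode32 is a hypothetical helper; newer bytebuffer.js versions also ship writeVarint32ZigZag/readVarint32ZigZag, which handle both directions for you):

// ZigZag decode: inverse of (n << 1) ^ (n >> 31).
function zigZagDecode32(n) {
    return (n >>> 1) ^ -(n & 1);
}

b.flip();
var decoded = zigZagDecode32(b.readVarint32()); // 0x2C -> 0x16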

npm bytebuffer.toArrayBuffer tests

RHEL 6.x, Linux

x bytebuffer.toArrayBuffer 0.968 ms 2 assertions

AssertionError: "undefined" == 18
  at null.equal (/root/testbuffer/node_modules/bytebuffer/node_modules/testjs/test.js:336:26)
  at Object.suite.toArrayBuffer [as test] (/root/testbuffer/node_modules/bytebuffer/tests/suite.js:698:14)
  at Domain.<anonymous> (/root/testbuffer/node_modules/bytebuffer/node_modules/testjs/test.js:291:37)
  • bytebuffer.toBuffer 0.410 ms 6 assertions

  • bytebuffer.printDebug 1.136 ms 1 assertions

  • bytebuffer.encode/decode/calculateUTF8Char 1.992 ms 39 assertions

  • bytebuffer.pbjsi19 1.087 ms 1 assertions

  • bytebuffer.encode/decode64 0.881 ms 16 assertions

  • bytebuffer.encode/decodeHex 0.281 ms 2 assertions

  • bytebuffer.encode/decodeBinary 0.654 ms 3 assertions

  • bytebuffer.NaN 0.752 ms 12 assertions

  • bytebuffer.ByteBuffer-like 0.519 ms 8 assertions

  • bytebuffer.commonjs 39.647 ms 2 assertions

  • bytebuffer.amd 34.666 ms 1 assertions

  • bytebuffer.shim 41.833 ms 2 assertions

    ByteBuffer(offset=0,markedOffset=-1,length=90,capacity=90)

    <48 65 6C 6C 6F 20 77 6F 72 6C 64 21 20 66 72 6F Hello.world!.fro
    6D 20 42 79 74 65 42 75 66 66 65 72 2E 6A 73 2E m.ByteBuffer.js.
    20 54 68 69 73 20 69 73 20 6A 75 73 74 20 61 20 .This.is.just.a.
    6C 61 73 74 20 76 69 73 75 61 6C 20 74 65 73 74 last.visual.test
    20 6F 66 20 42 79 74 65 42 75 66 66 65 72 23 70 .of.ByteBuffer#p
    72 69 6E 74 44 65 62 75 67 2E> rintDebug.

  • bytebuffer.helloworld 0.825 ms 0 assertions

    test ERROR 1 of 66 failed (231.359 ms 357 assertions)

Is this a known issue or configuration issue on my side?

appendTo verbiage

I'm appending to a target; this is the target's offset, right? So if my target has 10 bytes and I'm appending at offset 5, then I would be overwriting the last five bytes? In that case, maybe it should say "any contents at and after the specified offset" .... I think the audience may relate to buffers as lines on a page and would read them left to right.

ByteBuffer#appendTo(target, offset=)

Appends this ByteBuffer's contents to another ByteBuffer. This will overwrite 
any contents behind the specified offset up to the length of this ByteBuffer's data.

ByteBuffer.writeVString offset calculation incorrect.

Just testing the new v3 of ByteBuffer and noticed a bug when calculating the varint32 encoded byte length of the length of bytes of the string.

Currently, the value of 'k' is used to calculate 'l' before 'k' is set to a value. This means the value of 'l' is always 1, and therefore the offset calculation is not correct for longer strings and can throw an overflow exception if the buffer is not resized properly.

Just need to switch the two lines around to fix the problem:

// Existing:
ByteBuffer.prototype.writeVString = function(str, offset) {
    ...
    var start = offset,
        k, l;
    l = ByteBuffer.calculateVarint32(k);
    k = utf8_calc_string(str);
    offset += l+k;
    ...
};

// Revised:
ByteBuffer.prototype.writeVString = function(str, offset) {
    ...
    var start = offset,
        k, l;
    k = utf8_calc_string(str);
    l = ByteBuffer.calculateVarint32(k);
    offset += l+k;
    ...
};

easy way to read a bunch of characters at current offset?

I'm struggling to read an array of bytes after a couple of readInt8/16 calls. printDebug is showing the right offset, but I'm failing to retrieve only 7 bytes from the array. I tried using copy, slice (it returns the whole array), and clone, along with mark, flip, and compact.
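A sketch of one way to do this, assuming bytebuffer.js v3's readBytes (which slices at the current offset and advances it):

var chunk = bb.readBytes(7);        // ByteBuffer over the next 7 bytes
var bytes = chunk.toArrayBuffer();  // or chunk.toBuffer() under node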

WebPack deployment error: Cannot resolve module 'Long'

There is an issue deploying ByteBuffer 3.5.4 under webpack. Here are steps to reproduce:

Setup:

npm install webpack
npm install bytebuffer
echo "module.exports=require('bytebuffer')" > bb.js

Run:

webpack --entry ./bb.js bb_out.js
Hash: 1c6e193b5747ffb98b06
Version: webpack 1.8.11
Time: 271ms
   Asset    Size  Chunks             Chunk Names
bb_out.js  192 kB       0  [emitted]  null
   [0] ./bb.js 37 bytes {0} [built]
    + 5 hidden modules

ERROR in ../~/bytebuffer/dist/ByteBufferAB.js
Module not found: Error: Cannot resolve module 'Long' in /home/jcalfee/bitshares/gui/node_modules/bytebuffer/dist
 @ ../~/bytebuffer/dist/ByteBufferAB.js 3271:8-87
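A possible workaround (an assumption, untested): alias the capitalized 'Long' require to the long npm package in the webpack config:

// webpack.config.js (webpack 1.x style)
module.exports = {
    entry: './bb.js',
    output: { filename: 'bb_out.js' },
    resolve: {
        alias: { 'Long': 'long' } // map require('Long') to the long package
    }
};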

v3 install failing

[email protected] install /home/majid/Projects/learn/bytebufjs/node_modules/bytebuffer/node_modules/memcpy
node-gyp configure build

make: Entering directory `/home/majid/Projects/learn/bytebufjs/node_modules/bytebuffer/node_modules/memcpy/build'
CXX(target) Release/obj.target/memcpy/src/memcpy.o
../src/memcpy.cc: In function 'v8::Handle<v8::Value> memcpy(const v8::Arguments&)':
../src/memcpy.cc:111:5: error: 'memmove' was not declared in this scope
);
^
make: *** [Release/obj.target/memcpy/src/memcpy.o] Error 1

ByteBuffer.calculateUTF8String() method renamed in minified version.

The ByteBuffer.calculateUTF8String() method is listed as a public API in the documentation. In the minified version, the name of this method has been minified to ByteBuffer.a() which effectively hides the method from public use. This will also cause errors for those coding against the non-minified version of ByteBuffer and then switching to the minified version of ByteBuffer for production.
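One conventional remedy (a sketch, not the project's actual fix): re-export public API members via quoted property names so Closure-style minifiers cannot rename them:

// Quoted access survives property renaming in ADVANCED_OPTIMIZATIONS builds.
ByteBuffer["calculateUTF8String"] = ByteBuffer.calculateUTF8String;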

Abstract class for browser and node

Regarding #8, ByteBuffer makes it really easy to work with Buffer, but the conversion in and out (along with ProtoBuf 'overhead') makes the code really slow (comparing ByteBuffer to Buffer, I'm seeing roughly a 60x slowdown). When using Transform streams on Node, you notice how slow it is to do ByteBuffer.wrap, then bytebuffer.toBuffer(), or even bytebuffer.append with a native Buffer instead of another ByteBuffer instance.

So I'm proposing a way, while maintaining browser compatibility, to create an intermediate abstract class underlying each of the current ByteBuffer implementations: instead of using this.view.getUint8, for example, it would become this.adapter.getUint8, which might map to either Buffer.prototype.readUInt8 in node or instance.view.getUint8 in the browser (for DataView). This shouldn't be a hassle to implement, but the performance gains would be major in Node.

An example: wrapping a simple 80-byte "native" Buffer with ByteBuffer and walking through it takes around 20ms. Doing the same walk "manually" on the native Buffer takes <1ms. I'm using it mainly for https://github.com/pocesar/node-protosocket and the numbers aren't impressive. Since node TCP connections are really performant, having the parser be the bottleneck is a shot in the foot. Even using Streams.Transform currently performs better than ByteBuffer in Node (around 2-3ms)...

If you think this is the way to go, I'll rewrite ByteBuffer to work with Node in a better way, with no changes to the current ByteBuffer API, using ArrayBuffer only when Buffer isn't available (which would also allow plugging other cross-platform buffer libraries into ByteBuffer).
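A rough sketch of the proposed adapter indirection (all names illustrative):

function NodeAdapter(buffer) { this.buffer = buffer; }
NodeAdapter.prototype.getUint8 = function(offset) {
    return this.buffer.readUInt8(offset); // native Buffer, no DataView hop
};

function BrowserAdapter(view) { this.view = view; }
BrowserAdapter.prototype.getUint8 = function(offset) {
    return this.view.getUint8(offset); // DataView over an ArrayBuffer
};

// ByteBuffer methods would then call this.adapter.getUint8(...) and stay
// identical across both environments.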

length property or method to ByteBuffer objects

Currently, the only way to check the length of a ByteBuffer object is to subtract offset from limit. This is somewhat unwieldy, so it would be helpful to have a length property or method that calculates that.
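A minimal sketch of such an accessor (hypothetical; note that bytebuffer.js v3 added remaining() for the same computation):

// Expose the number of bytes between offset and limit as a property.
Object.defineProperty(ByteBuffer.prototype, 'length', {
    get: function() { return this.limit - this.offset; }
});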

Create from Buffer

Is there a way to create a byte buffer from an existing buffer (node Buffer or ArrayBuffer)?

I am using the ws module which when it gets a message in node will be a Buffer and in browser will be an ArrayBuffer. I would like to create a ByteBuffer using that already existing buffer as the backing store:

var WS = require('ws');

var ws = new WS(...);

ws.binaryType = 'arraybuffer'; // node buffer on node, arraybuffer on browsers

ws.addEventListener('message', function (event) {
    // here, event.data is node buffer or an arraybuffer
    // it would be nice to just create a bytebuffer from it
    var buffer = new ByteBuffer(event.data);
    // or
    var buffer = ByteBuffer.fromBuffer(event.data);
});

I looked around briefly but didn't see anything in the API to let me do this (without changing the properties manually). Ideally it would never create a Buffer/ArrayBuffer object because I am giving it one instead.

Right now I am doing this as a workaround:

var bb = new ByteBuffer(0);
bb.buffer = event.data;
// TODO: I would rather use a typed array here for the view, see:
// https://github.com/dcodeIO/ByteBuffer.js/issues/16#issuecomment-110802403
bb.view = process.browser ? new DataView(event.data) : null;
bb.limit = process.browser ? event.data.byteLength : event.data.length;
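Note: ByteBuffer.wrap may already cover this; it accepts a Buffer, ArrayBuffer, or Uint8Array and reuses the underlying memory where possible rather than copying (hedged, version-dependent):

var buffer = ByteBuffer.wrap(event.data); // wraps the existing backing store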

toBuffer(true) gives a ReferenceError

ReferenceError: b is undefined
On line 2593 of ByteBufferNB.js

Essentially line 2593 should be:
return buffer;

I would submit a pull request but I am unsure if this is replicated anywhere else in the codebase.

Using Buffers under node.js

Working with ArrayBuffers under node.js is suboptimal for a few reasons:

  1. Pretty much everything in the node universe uses Buffers, so copying between ArrayBuffers and Buffers is required regularly.
  2. Copying between Buffers and ArrayBuffers can be ridiculously slow (see).

Thus, building two different but API-compatible versions of ByteBuffer.js (one for node using Buffers and one for the browser using ArrayBuffers) is what we want from a performance perspective, while keeping in mind to make it a robust and developer-friendly library that can be used in the same way on servers and browsers, regardless of the backing buffer's type.

ByteBuffer 3 will do exactly that besides making the API more intuitive.

Unpacking

Is it possible to have something like:

 data.readString(16).as('username')
 data.readString(16).as('password')
 // request
 data.unpack()
 // returns
 > { username: 'user', password: 'pass' }
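Such an API could be approximated in user land; a hypothetical sketch (Unpacker and its methods are invented names):

function Unpacker(bb) { this.bb = bb; this.fields = {}; }
Unpacker.prototype.readString = function(len, name) {
    this.fields[name] = this.bb.readUTF8String(len); // read at the current offset
    return this; // chainable
};
Unpacker.prototype.unpack = function() { return this.fields; };

new Unpacker(data)
    .readString(16, 'username')
    .readString(16, 'password')
    .unpack(); // -> { username: 'user', password: 'pass' }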

printDebug problem in Chrome

printDebug in Chrome doesn't work.

if (typeof out !== 'function') out = console.log;

should be replaced with:

if (typeof out !== 'function') out = console.log.bind(console);

Some valid negative int32 values cause an error when being decoded.

https://developers.google.com/protocol-buffers/docs/encoding states that:

If you use int32 or int64 as the type for a negative number, the resulting varint is always ten bytes long

ByteBuffer.prototype.readVarint32 rejects values that occupy more than ByteBuffer.MAX_VARINT32_BYTES = 5 bytes. Instead of rejecting these values it should discard the extra bits - like https://code.google.com/p/protobuf/source/browse/trunk/src/google/protobuf/io/coded_stream.cc?r=417#294

I am not sure that negative int32 values must always occupy ten bytes, because in practice even Google's implementation seems to accept values without the extra bytes.
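A sketch of the lenient behavior described above (a hypothetical standalone helper, mirroring the C++ reference by discarding bits past the 32nd):

function readVarint32Lenient(bb) {
    var result = 0, shift = 0, b;
    do {
        b = bb.readUint8();
        if (shift < 32)
            result |= (b & 0x7F) << shift; // keep the low 32 bits only
        shift += 7;                        // extra bytes are read and dropped
    } while (b & 0x80);
    return result | 0;
}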

Code coverage

Code coverage is almost perfect; Istanbul is reporting:

Statements: 88.32% (839 / 950)      
Branches: 74.66% (442 / 592)      
Functions: 95.51% (85 / 89)      
Lines: 90.28% (808 / 895)      
Ignored: none

Aiming for 100% code coverage is a good thing, since this library is the "base" of ProtoBuf.js.

New function that recalculates the length

Maybe passing an option to reset could 'reset' the offset but recalculate the length from this.array.byteLength. Right now, it sets the length to 0:

        ByteBuffer.prototype.reset = function() {
            if (this.array === null) {
                throw(new Error(this+" cannot be reset: Already destroyed"));
            }
            if (this.markedOffset >= 0) {
                this.offset = this.markedOffset;
                this.markedOffset = -1;
            } else {
                this.offset = 0;
                this.length = 0;
            }
            return this;
        };

compact() is an expensive function to call just to reassign the length of the current buffer.

this did it:

  ByteBuffer.prototype.refresh = function(toBegin){
    this.length = this.array == null ? 0 : this.array.byteLength;
    if (toBegin === true) {
      this.offset = 0;
    }
    return this;
  };

The reason is that I do a lot of reading through many functions that advance incrementally, but after a while I need to reset the offset while keeping the original length.

using typed-arrays-polyfill error

When using typed-arrays-polyfill to send messages over a WebSocket and capturing the packets with Wireshark, the opcode of the frame is text (0001). Without the polyfill, the opcode is binary (0010).

PS: the WebSocket binaryType is already set to "arraybuffer".

Don't use instanceof for isByteBuffer

Using instanceof for the isByteBuffer check can break in certain situations.

For example, I have a lib that uses a specific version of ByteBuffer. Then I have a project that uses a different version of ByteBuffer.

If my lib returns a ByteBuffer, and then I do ByteBuffer.isByteBuffer(value) in my project it will return false because the prototype of the two version is different. This is a problem because you can't guarantee the two prototypes are always the same.

Instead of doing instanceof there should be a duck type check:

function ByteBuffer() {
    this.__isByteBuffer = true; // marker property survives across versions
}

ByteBuffer.isByteBuffer = function(bb) {
    return !!(bb && bb.__isByteBuffer);
};

Offset is not correct after calling ByteBuffer#writeCString with data already in the buffer

After calling writeCString on a buffer with data already in it, the offset is set to the length of the string + 1, instead of offset + length + 1.

Code to reproduce:

var str = "This is a test string.";
var body = new ByteBuffer(BUFFER_MAX_SIZE, ByteBuffer.LITTLE_ENDIAN);
body.writeUint32(55555); // 4 bytes, offset = 4
body.writeUint16(1234); // 2 bytes, offset = 6
body.writeCString(str); // 23 bytes, offset should = 29
body.writeUint32(123456); // 4 bytes, offset should = 33

console.log("Expected offset: %d, actual: %d", 4 + 2 + str.length + 1 + 4, body.offset); // Expected offset: 33, actual: 27

body.flip();
console.log("Uint32: %d, Uint16: %d string: %s, Uint32: %d", body.readUint32(), body.readUint16(), body.readCString(), body.readUint32()); // Uint32: 55555, Uint16: 1234 string: This is a test st@?โ˜บ, Uint32: 46

confusion when working with several bytebuffer objects

I'm trying to do some basic things like building smaller buffers and appending them to larger ones. I'm having a hard time with extra data in the buffer that I did not add:

http://stackoverflow.com/questions/26486525/how-do-i-get-rid-of-extra-data-in-the-nodejs-bytebuffer

One way I found was b = b.copy(0, b.offset), which is fine except that when I go to append more data with b.append(more_data), it starts at the first byte and overwrites data. Should I then use b.append(more_data, 'binary', b.offset)?

Could you add a simple example that shows how to put buffers together and ensure there is no extra unexpected data? I think that would help a lot. Thank you.
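In the meantime, a small sketch of the flip-then-append pattern that avoids the extra bytes (assuming the standard v3 API):

var part = new ByteBuffer();
part.writeUint32(42);
part.flip();           // limit -> bytes written, offset -> 0

var whole = new ByteBuffer();
whole.append(part);    // copies only part's [offset, limit) region
whole.flip();          // prepare `whole` for reading in turn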

Minified exceptions relating to String.fromCodePoint and String.prototype.codePointAt shims.

When switching to the minified version of the ByteBufferAB.js library, exceptions are being thrown that look to be related to errors in minification of the String.fromCodePoint and String.prototype.codePointAt shims:

First error is on line 55, character 116:

// TypeError: Unable to get property 'apply' of undefined or null reference
... c=String.a.apply(String,f) ...

Which looks to be from the method ByteBuffer.readVString() and should be a call to 'String.fromCodePoint.apply(...)' instead of 'String.a.apply(...)'

Second error is on line 54, character 124:

// TypeError: Object doesn't support property or method 'b'
... b+=v(a.b(f),this,b) ...

Which looks to be from the ByteBuffer.writeVString() and should be a call to 'a.codePointAt(...)' instead of 'a.b(...)'

So it looks like the 'fromCodePoint' and 'codePointAt' names are being minified when they shouldn't be, or the minification isn't renaming the shims properly. I think this is because the shims refer to themselves through string names like:

String.prototype["codePointAt"] = codePointAt;

Slight Performance Gain for ByteBuffer.writeVarint32

After using a JavaScript profiler, I noticed that when calling ByteBuffer.writeVarint32 there were calls to not only write but also read from the underlying buffer. Curious, I looked at the source code and noticed that the algorithm writes a value to the buffer with the MSB set and then eventually reads back the value to clear the MSB and then write the value back to the buffer. This seems inefficient. I therefore altered the algorithm to avoid the reads and additional writes and gained a slight performance increase.

Existing Function:

ByteBuffer.prototype.writeVarint32 = function(value, offset) {
    var advance = typeof offset === 'undefined';
    offset = typeof offset !== 'undefined' ? offset : this.offset;
    // ref: http://code.google.com/searchframe#WTeibokF6gE/trunk/src/google/protobuf/io/coded_stream.cc
    value = value >>> 0;
    this.ensureCapacity(offset+ByteBuffer.calculateVarint32(value));
    var dst = this.view,
        size = 0;
    dst.setUint8(offset, value | 0x80);
    if (value >= (1 << 7)) {
        dst.setUint8(offset+1, (value >> 7) | 0x80);
        if (value >= (1 << 14)) {
            dst.setUint8(offset+2, (value >> 14) | 0x80);
            if (value >= (1 << 21)) {
                dst.setUint8(offset+3, (value >> 21) | 0x80);
                if (value >= (1 << 28)) {
                    dst.setUint8(offset+4, (value >> 28) & 0x7F);
                    size = 5;
                } else {
                    dst.setUint8(offset+3, dst.getUint8(offset+3) & 0x7F);
                    size = 4;
                }
            } else {
                dst.setUint8(offset+2, dst.getUint8(offset+2) & 0x7F);
                size = 3;
            }
        } else {
            dst.setUint8(offset+1, dst.getUint8(offset+1) & 0x7F);
            size = 2;
        }
    } else {
        dst.setUint8(offset, dst.getUint8(offset) & 0x7F);
        size = 1;
    }
    if (advance) {
        this.offset += size;
        return this;
    } else {
        return size;
    }
};

Revised Function:

ByteBuffer.prototype.writeVarint32 = function(value, offset) {
    var advance = typeof offset === 'undefined';
    offset = typeof offset !== 'undefined' ? offset : this.offset;
    // ref: http://code.google.com/searchframe#WTeibokF6gE/trunk/src/google/protobuf/io/coded_stream.cc
    value = value >>> 0;
    this.ensureCapacity(offset+ByteBuffer.calculateVarint32(value));
    var dst = this.view,
        size = 0;
    if (value >= (1 << 7)) {
        dst.setUint8(offset, value | 0x80);
        if (value >= (1 << 14)) {
            dst.setUint8(offset+1, (value >> 7) | 0x80);
            if (value >= (1 << 21)) {
                dst.setUint8(offset+2, (value >> 14) | 0x80);
                if (value >= (1 << 28)) {
                    dst.setUint8(offset+3, (value >> 21) | 0x80);
                    dst.setUint8(offset+4, (value >> 28) & 0x7F);
                    size = 5;
                } else {
                    dst.setUint8(offset+3, (value >> 21) & 0x7F);
                    size = 4;
                }
            } else {
                dst.setUint8(offset+2, (value >> 14) & 0x7F);
                size = 3;
            }
        } else {
            dst.setUint8(offset+1, (value >> 7) & 0x7F);
            size = 2;
        }
    } else {
        dst.setUint8(offset, value & 0x7F);
        size = 1;
    }
    if (advance) {
        this.offset += size;
        return this;
    } else {
        return size;
    }
};

On a side note for this same method, I noticed that the unused bits (when encoding a large value) are set to 1's (for some values) due to the 0x7F mask. I've seen other implementations set/leave these unused bits as 0's. This caused a problem for me when I was comparing the binary data from a C++ backend against the same data encoded by the JavaScript frontend using this library. I have therefore changed my code to use the 0x0F mask instead to ensure they are always consistently 0's.

Existing Function:

ByteBuffer.prototype.writeVarint32 = function(value, offset) {
...
                if (value >= (1 << 28)) {
                    dst.setUint8(offset+4, (value >> 28) & 0x7F);
                    size = 5;
                } else {
...
};

Revised Function:

ByteBuffer.prototype.writeVarint32 = function(value, offset) {
...
                if (value >= (1 << 28)) {
                    dst.setUint8(offset+4, (value >> 28) & 0x0F);
                    size = 5;
                } else {
...
};

Error

Hello, I'm having this error:
RangeError: Illegal range: 0 <= 21 <= 16 <= 32

My code is:
var cmd = 'DYD,000000#';
var command = new ByteBuffer();
command.writeByte(0x78);
command.writeByte(0x78); // size
command.writeByte(5 + cmd.length); // size
command.writeByte(0x80); //Protocol Number
command.writeByte(4 + cmd.length); //Length of Command
command.writeByte(0x91); // Server Flag Bit end
command.writeUTF8String(cmd)
command.writeShort(crc.crc16ccitt(command.slice(2, 4)));
command.writeByte(0x0D); command.writeByte(0x0A); // ending
console.dir(command)
socket.write(command.toString('hex'))

ByteBuffer.writeVarint64 truncating value to 32 bits.

Whilst testing the new version, I've noticed that when a number that is greater than 32 bits is written as a Varint64, the code is truncating the value to 32 bits.

ByteBuffer.prototype.writeVarint64 = function(value, offset) {
    ...
    if (!this.noAssert) {
        if (typeof value === 'number' && value % 1 === 0)
            value |= 0; // This is truncating the value to 32 bits.
        ...
    }
    ...
};

I have tested this in IE 11, Chrome 35 and FF 31 with the same result.

An example is:

var value = 13270440001;
value |= 0; // value is now: 385538113

I've also noticed that this same operation is used a few times throughout the new version and I'm wondering if similar issues will arise. I can see that the resize method performs the same operation on the capacity and thus this would artificially limit the capacity.

Documentation

The documentation seems not to load, and it would be great to have more example usage.

Error: Cannot load file http://raw.github.com/dcodeIO/ByteBuffer.js/master/docs/module-ByteBuffer.html

Weird offset computation when using the ByteBuffer.writeCString() method

I'm using ByteBuffer to fill a buffer with string data:

d = new dcodeIO.ByteBuffer()
Object { buffer: ArrayBuffer, view: DataView, offset: 0, markedOffset: -1, limit: 16, littleEndian: false, noAssert: false }
d.writeCString("aaa")
Object { buffer: ArrayBuffer, view: DataView, offset: 4, markedOffset: -1, limit: 16, littleEndian: false, noAssert: false }
d.writeCString("bbb")
Object { buffer: ArrayBuffer, view: DataView, offset: 4, markedOffset: -1, limit: 16, littleEndian: false, noAssert: false }
d.writeCString("ccc")
Object { buffer: ArrayBuffer, view: DataView, offset: 4, markedOffset: -1, limit: 16, littleEndian: false, noAssert: false }

As you can see the offset is only updated after the first operation, while all successive operations will leave the same offset, so that the third write will overwrite the content written by the second one.

If I use writeUTF8String() it is working as expected, that is the offset is correctly updated after the second write.
Is it a defect or am I missing something obvious?

TIA

char array

How would this be represented with ByteBuffer?

UINT8 data[32]
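One plausible mapping (a sketch; `data` is assumed to be a plain array of byte values): treat UINT8 data[32] as a fixed-length 32-byte slot, padding with zeroes:

for (var i = 0; i < 32; i++)
    bb.writeUint8(i < data.length ? data[i] : 0); // fixed 32-byte field, zero-padded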

ByteBuffer.fromHex returns an empty buffer for non-hex characters

Using version 3.5.5 with nodejs v0.12.4

var ByteBuffer = require('bytebuffer')
var a = ByteBuffer.fromHex('vvzzkk', true, false)
console.log(a);
var b = ByteBuffer.fromHex('vvzzkk', true, true)
console.log(b);

prints

{ buffer: <Buffer >,
  offset: 0,
  markedOffset: -1,
  limit: 0,
  littleEndian: true,
  noAssert: true }
{ buffer: <Buffer >,
  offset: 0,
  markedOffset: -1,
  limit: 0,
  littleEndian: true,
  noAssert: true }

I would expect it to throw an error for non-hex characters, given https://github.com/dcodeIO/ByteBuffer.js/blob/master/src/encodings/hex.js#L65

ByteBuffer.wrap should accept an array

Is there any reason why ByteBuffer.wrap can't use a plain array? If it can already use a string, why couldn't it use an array?

I'm saying this because I've created a new prototype function called readBytes that looks like the following:

  ByteBuffer.prototype.readBytes = function (length, offset){
    length = typeof length !== 'undefined' ? length : this.length;
    offset = typeof offset !== 'undefined' ? offset : (this.offset+=length)-length;
    if (offset + length > this.array.byteLength) {
      throw(new Error('Cannot read ' + length + ' bytes from ' + this + ' at ' + offset + ': Capacity overflow'));
    }

    var out = new ByteBuffer(length, this.littleEndian); // this instead of []
    for (var i = 0; i < length; i++) {
      out.writeUint8(this.view.getInt8(offset + i, this.littleEndian)); // this instead of out.push()
    }
    out.flip();

    return out; // returns a bytebuffer instead of array, since cant use ByteBuffer.wrap on array
  };

ByteBuffer.wrap errors out with "Cannot wrap buffer of type object, Array". I know it expects a native Buffer, ByteBuffer or ArrayBuffer, but I wanted to keep it "neutral" so I can use it in the browser. Is that possible, or should I use a typed array anyway?
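A possible workaround until wrap() learns about plain arrays: convert the array to a typed array first, since wrap() does accept an ArrayBuffer:

var bb = ByteBuffer.wrap(new Uint8Array([1, 2, 3]).buffer); // copy the array into an ArrayBuffer, then wrap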

Stack overflow with large data in toBinary

I'm running into a problem where ByteBuffer.toBinary will run into a stack overflow when the data it's encapsulating is large enough.

The problem happens right at the end, with String.fromCharCode.apply(String, out), the size of out is too large for a single stack frame, so it throws.

I imagine looping through out and calling fromCharCode on each code individually would be too slow, particularly in this use case, but perhaps when out is large enough the array could be partitioned to avoid this?
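A sketch of the partitioning idea (the CHUNK size is illustrative; `out` is the code-unit array built by toBinary):

var CHUNK = 8192, parts = [];
for (var i = 0; i < out.length; i += CHUNK)
    parts.push(String.fromCharCode.apply(String, out.slice(i, i + CHUNK))); // bounded args per call
var binary = parts.join(''); // same result, bounded stack usage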

readUTF8StringBytes doesn't decode characters larger than 16 bits correctly

Simple to reproduce - write an emoji that is larger than 16 bits (such as a smiley face) into a ByteBuffer using writeVString(). Then try to read it out using readVString(). The character is corrupted - seems that String.fromCharCode() isn't doing the right thing here. Note that if you (in node) call toBuffer().toString() after writing the emoji to the ByteBuffer, it will be internally correct (writing works correctly).

Workaround in node is to replace readVString with an implementation that reads the size and converts a slice to a native Buffer.
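A sketch of that node workaround (a hypothetical helper; slice() shares the backing memory, so only the final toString copies):

function readVStringViaBuffer(bb) {
    var len = bb.readVarint32();                  // varint length prefix, advances offset
    var str = bb.slice(bb.offset, bb.offset + len)
                .toBuffer()                       // native Buffer (node only)
                .toString('utf8');                // Buffer decodes surrogate pairs correctly
    bb.offset += len;
    return str;
}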

Allow any string encoding

The Node.js Buffer can encode and decode strings using various encodings, and the iconv-lite library adds additional encodings. ByteBuffer should be flexible enough to handle any encoding that Buffer can handle out of the box, plus additional encodings added by other libraries (e.g. iconv-lite). Currently, ByteBuffer encodes and decodes only UTF-8.
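Until then, other encodings can be handled outside the library; a sketch using iconv-lite (assumed to be installed; the encoding name is just an example):

var iconv = require('iconv-lite');
bb.append(iconv.encode(str, 'win1251'));            // encode to a Buffer before writing
var text = iconv.decode(bb.toBuffer(), 'win1251');  // decode after reading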

Dependency of "latest" on Long is breaking

Recently, while doing some npm installs, we noticed a significant break in our dependencies. Unfortunately our versions require [email protected], which then requires bytebuffer@>=2.2 <3, which in turn requires long@latest.

This appears to be a breaking change (rightly so; our old version of long was 1.2 and the new one is 2.0), but we don't have control over that per se. Any ideas on how to update bytebuffer to a version that uses the working long 1.2? Or is there something we need to change on our end?

What we noticed was:

Object function (low, high, unsigned) {

          /**
            * The low 32 bits as a signed value.
            * @type {number}
            * @expose
            */
          this.low = low|0;

          /**
            * The high 32 bits as a signed value.
            * @type {number}
            * @expose
            */
          this.high = high|0;

          /**
            * Whether unsigned or not.
            * @type {boolean}
            * @expose
            */
          this.unsigned = !!unsigned;
      } has no method 'from28Bits'
