gloss's People

Contributors

amalloy, bo-tato, drsnyder, duck1123, geoffsalmon, geremih, ibodrov, japonophile, kingmob, llasram, mithrandi, ninjudd, seancorfield, skynet-gh, slipset, sunng87, vonzeppelin, ztellman


gloss's Issues

Non byte-aligned codecs

Is there a way to handle codecs which are not byte-aligned? I'm trying to read a format which represents a vector using a 1-bit header before each value. The header determines whether there is another value to read.
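For what it's worth, gloss does have some sub-byte support via bit-seq, though as far as I can tell only when the combined bit-lengths are byte-aligned, so it doesn't directly express an unbounded continuation-bit format. A sketch of what it does cover:

```clojure
(require '[gloss.core :as gloss])

;; A 1-bit flag packed with a 7-bit value into a single byte;
;; bit-seq requires the bit-lengths to sum to a multiple of 8.
(gloss/defcodec flagged (gloss/bit-seq 1 7))

;; Decoding one byte yields a two-element sequence: [flag value].
```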

Fails to compile with clojure 1.8

When I compile the project under Clojure 1.8, I get this error:

#error {
 :cause IllegalName: compile__stub.gloss.data.bytes.core.gloss.data.bytes.core/MultiBufferSequence
 :via
 [{:type clojure.lang.Compiler$CompilerException
   :message java.lang.NoClassDefFoundError: IllegalName: compile__stub.gloss.data.bytes.core.gloss.data.bytes.core/MultiBufferSequence, compiling:(gloss/data/bytes/core.clj:78:1)
   :at [clojure.lang.Compiler analyzeSeq Compiler.java 6875]}
  {:type java.lang.NoClassDefFoundError
   :message IllegalName: compile__stub.gloss.data.bytes.core.gloss.data.bytes.core/MultiBufferSequence
   :at [java.lang.ClassLoader preDefineClass ClassLoader.java 654]}]

and so on...

Similar issue is clj-commons/aleph#189.
Please upgrade potemkin.

data loss with io/decode-stream

I was working through a simple exercise to learn Clojure: a server that reads newline-separated JSON requests over TCP and responds to them. My code using gloss is here. I found gloss because that's what the aleph examples used, though I later realized there's a simpler to-line-seq in byte-streams for this task. I ran into a bug with decode-stream that seems to be already known.
I'm totally new to Clojure and manifold streams, so this is my understanding of the bug and my fix, which may be totally wrong:
decode-stream reads from a source stream and writes to out; as soon as the input stream is drained, it calls close on out. Since calls to put! on out are non-blocking, maybe the last put! on out hasn't been written yet at the time we close out.
Edit: I think that understanding was wrong. If you call put! or put-all! on a stream and then immediately close! it, those puts still get written before the stream closes. I now think the race condition is that decode-stream reads bytes from src, parses them, and then calls put! on dst, so src can be drained and trigger the close of dst while parsing is still in progress, before put! has been called.
Here is the code that closes it:

(s/connect-via src f dst {:downstream? false})
(s/on-drained src #(do (f []) (s/close! dst)))

As far as I understand manifold streams, the only effect of the :downstream? option is whether manifold automatically closes the stream for us. When manifold does it automatically, it waits for the input stream to be drained or closed and for pending writes to out to finish before closing out. I just deleted those two lines and replaced them with:

(s/connect-via src f dst)

Like I said, I don't know Clojure or manifold well, so I'm not sure whether leaving :downstream? at its default of true has undesired consequences, but it fixes this bug for my program. All the protohackers tests pass after making that change.

Also, with this little program I ran into this bug, which is already fixed in git but not in the published version of the package. It seems telling that a very simple use case (which I assume means it also happens in plenty of real programs) hits two already-known bugs that could cause subtle, hard-to-debug issues. It seems important to at least add a note to the documentation if there isn't time for a proper fix.

The painful debugging aside, I really like the library! It looks very nice for dealing with binary protocols simply.

header as body

Here's an interesting situation. Check out bitcoin's variable-length integers.

In the first case, ubyte < 0xfd, the body takes the value of the header. I worked around this by closing over the header value here

The problem is that I don't know how to go the other way. Perhaps there's some way to return an empty header? Or maybe a function which returns both header and body?

What do you think?

Nested codecs?

Hi, I'm trying to use gloss to import binary STL data - see here for the format: https://en.wikipedia.org/wiki/STL_(file_format)

Firstly I tried creating:
(defcodec point-spec
  {:x :float32-le
   :y :float32-le
   :z :float32-le})

(defcodec triangle-spec
  {:normal point-spec
   :points [point-spec point-spec point-spec]
   :attributes :uint16-le})

(defcodec stl-spec
  {:header (string :ascii :length 80)
   :triangles (repeated triangle-spec :prefix :uint32-le)})

(decode-all 
  stl-spec 
  (to-byte-buffer 
    (io/file "some-valid-3d-file.stl")))

And although I get the desired structure back, the floats are all garbled.

Next, I tried embedding the point and triangle as plain defs inside the stl-spec:
(def point-spec
  {:x :float32-le
   :y :float32-le
   :z :float32-le})

(def triangle-spec
  {:normal point-spec
   :points [point-spec point-spec point-spec]
   :attributes :uint16-le})

(defcodec stl-spec
  {:header (string :ascii :length 80)
   :triangles (repeated triangle-spec :prefix :uint32-le)})  

Still no joy... the float values are similarly out by massive magnitudes.

The only way I can get sane values for the floats is:
(def point-spec
  [:float32-le :float32-le :float32-le])

(def triangle-spec
  (concat 
    point-spec      ; normal
    point-spec      ; vertex 1
    point-spec      ; vertex 2
    point-spec      ; vertex 3
    [:uint16-le]))  ; attributes

(defcodec stl-spec
  {:header (string :ascii :length 80)
   :triangles (repeated triangle-spec :prefix :uint32-le)})  

This gives the correct values for the floats, but as a flat list rather than nested maps and vectors. I can munge the data out, but the fact that the nesting almost works makes this seem like an off-by-one style bug to me... or am I doing something wrong in the first two code snippets?
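In case it helps, one hedged workaround (assuming the flat frame in the last snippet is the one that decodes correctly): keep the flat frame and move the nesting into compile-frame's pre-encode/post-decode handlers. The ->point helper is just illustrative:

```clojure
(require '[gloss.core :refer [defcodec compile-frame]])

(def point-frame [:float32-le :float32-le :float32-le])

(defn ->point [[x y z]] {:x x :y y :z z})

(defcodec triangle-spec
  (compile-frame
    ;; 4 points (normal + 3 vertices) then the attribute count
    (vec (concat point-frame point-frame point-frame point-frame [:uint16-le]))
    ;; pre-encode: flatten the nested maps back into 13 values
    (fn [{:keys [normal points attributes]}]
      (-> (mapv normal [:x :y :z])
          (into (mapcat #(map % [:x :y :z]) points))
          (conj attributes)))
    ;; post-decode: rebuild {:normal ... :points ... :attributes ...}
    (fn [vs]
      {:normal     (->point (take 3 vs))
       :points     (mapv ->point (partition 3 (take 9 (drop 3 vs))))
       :attributes (last vs)})))
```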

Cheers,
R

Support int24 ?

Hi ztellman,

Is it possible to support int24 as a primitive? Netty's ChannelBuffer has getMedium and setMedium for 3-byte integers, and the type is also found in many protocols, such as Diameter.
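Until there's a primitive, here is a sketch of a big-endian unsigned 24-bit int built from three unsigned bytes via compile-frame (just an illustration, not tested against the wire):

```clojure
(require '[gloss.core :refer [compile-frame]])

(def uint24-be
  (compile-frame [:ubyte :ubyte :ubyte]
    ;; pre-encode: split the int into three big-endian bytes
    (fn [n] [(bit-and (bit-shift-right n 16) 0xff)
             (bit-and (bit-shift-right n 8) 0xff)
             (bit-and n 0xff)])
    ;; post-decode: reassemble the bytes into an int
    (fn [[a b c]] (bit-or (bit-shift-left a 16)
                          (bit-shift-left b 8)
                          c))))
```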

String padding

Proposed enhancement, and suggested syntax:

(string :us-ascii :length 12 :padding \0)

An example for a padded string of set length (in this case null-padded to length 12). I'm not sure how this would interact with the existing :suffix or :delimiter settings (I am still learning gloss).

The proposal would help me support the bitcoin message protocol here, where the command field is specified as follows:

Field Size: 12
Description: command
Data type: char[12]
Comments: ASCII string identifying the packet content, NULL padded (non-NULL padding results in packet rejected)
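Until something like :padding exists, a workaround sketch using compile-frame's pre-encode/post-decode hooks (the 12 matches the command field above; untested):

```clojure
(require '[clojure.string :as str]
         '[gloss.core :refer [compile-frame string]])

(def command-codec
  (compile-frame (string :us-ascii :length 12)
    ;; pre-encode: right-pad with NULs to the fixed length
    (fn [s] (apply str s (repeat (- 12 (count s)) \u0000)))
    ;; post-decode: strip the trailing NUL padding
    (fn [s] (str/replace s #"\u0000+$" ""))))
```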

Thanks!

decode finite-frame error

Hi, I'm using gloss 0.2.2. When I decode a finite-frame I get:

AssertionError Assert failed: success gloss.data.bytes/wrap-finite-block/fn--10445/fn--10446 (bytes.clj:76)

user> (defcodec fr (finite-frame
             :uint32
             {:client-id :int64 :id :uint32
              :cmd :byte :body (repeated :byte :prefix :none) }))
#'user/fr

user> (encode fr {:client-id 1 :id 2 :cmd 3 :body []})
(#<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=0 cap=0]> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=8 cap=8]> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=1 cap=1]> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]>)

user> (decode fr *1)
AssertionError Assert failed: success  gloss.data.bytes/wrap-finite-block/fn--10445/fn--10446 (bytes.clj:76)

Type depending on more fields.

I'm implementing a protocol which has multiple byte codes for one type. For example, a digital message may have codes in the range 0x90-0x9F. A naive function for generating such a code could look like this:

(defn digital-message
  [port]
  (bit-or 0x90 port))

I would like to create encoders and decoders that look like this:
decoder:

(decode-all decoder (to-byte-buffer [0x91 0x05 0x98 0x06])) ; [{:type :digital-message :port 1 :some-data 5}
                                                            ;  {:type :digital-message :port 8 :some-data 6}]

encoder:

(encode encoder {:type :digital-message :port 2 :some-data 5}) ; [0x92 0x05]

Unfortunately, I don't know how to implement this and combine it with gloss/header.
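One hedged sketch of how this might be expressed with gloss/header: treat the whole code byte as the header, mask out the port in the header->body function, and rebuild the byte in the body->header function. The :some-data frame here is invented for illustration:

```clojure
(require '[gloss.core :refer [compile-frame header ordered-map]])

(def digital-message
  (header
    :ubyte
    ;; header->body: derive the body codec from the code byte,
    ;; merging :type and :port into the decoded map
    (fn [code]
      (compile-frame (ordered-map :some-data :ubyte)
        identity
        #(assoc % :type :digital-message
                  :port (bit-and code 0x0f))))
    ;; body->header: rebuild the code byte from the value
    (fn [{:keys [port]}] (bit-or 0x90 port))))
```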

Thanks for any response.

decode-stream-header not working

When trying to use decode-stream-header, I get the following exception:

IllegalArgumentException cannot convert clojure.lang.PersistentArrayMap to sink  manifold.stream/->sink (stream.clj:70)
user=> (.printStackTrace *e)
java.lang.IllegalArgumentException: cannot convert clojure.lang.PersistentArrayMap to sink, compiling:(NO_SOURCE_FILE:1:8)
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3628)
    at clojure.lang.Compiler$DefExpr.eval(Compiler.java:439)
    at clojure.lang.Compiler.eval(Compiler.java:6787)
    at clojure.lang.Compiler.eval(Compiler.java:6745)
    at clojure.core$eval.invoke(core.clj:3081)
    at clojure.main$repl$read_eval_print__7099$fn__7102.invoke(main.clj:240)
    at clojure.main$repl$read_eval_print__7099.invoke(main.clj:240)
    at clojure.main$repl$fn__7108.invoke(main.clj:258)
    at clojure.main$repl.doInvoke(main.clj:258)
    at clojure.lang.RestFn.invoke(RestFn.java:1096)
    at clojure.tools.nrepl.middleware.interruptible_eval$evaluate$fn__608.invoke(interruptible_eval.clj:43)
    at clojure.lang.AFn.applyToHelper(AFn.java:152)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at clojure.core$apply.invoke(core.clj:630)
    at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1868)
    at clojure.lang.RestFn.invoke(RestFn.java:425)
    at clojure.tools.nrepl.middleware.interruptible_eval$evaluate.invoke(interruptible_eval.clj:41)
    at clojure.tools.nrepl.middleware.interruptible_eval$interruptible_eval$fn__649$fn__652.invoke(interruptible_eval.clj:171)
    at clojure.core$comp$fn__4495.invoke(core.clj:2437)
    at clojure.tools.nrepl.middleware.interruptible_eval$run_next$fn__642.invoke(interruptible_eval.clj:138)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: cannot convert clojure.lang.PersistentArrayMap to sink
    at manifold.stream$__GT_sink.invoke(stream.clj:70)
    at manifold.stream$connect_via.invoke(stream.clj:494)
    at manifold.stream$connect_via.invoke(stream.clj:492)
    at gloss.io$decode_stream_headers.doInvoke(io.clj:185)
    at clojure.lang.RestFn.invoke(RestFn.java:423)
    at asimov.tcpros$subscribe_BANG_$fn__1263.invoke(tcpros.clj:71)
    at manifold.deferred$fn__7431$chain___7452.invoke(deferred.clj:840)
    at asimov.tcpros$subscribe_BANG_.invoke(tcpros.clj:71)
    at asimov.api$sub_BANG_$iter__86__90$fn__91$fn__92.invoke(api.clj:95)
    at asimov.api$sub_BANG_$iter__86__90$fn__91.invoke(api.clj:85)
    at clojure.lang.LazySeq.sval(LazySeq.java:40)
    at clojure.lang.LazySeq.seq(LazySeq.java:49)
    at clojure.lang.RT.seq(RT.java:507)
    at clojure.core$seq__4128.invoke(core.clj:137)
    at asimov.api$sub_BANG_.invoke(api.clj:99)
    at clojure.lang.AFn.applyToHelper(AFn.java:160)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3623)
    ... 23 more
nil

Validations

Is there a way to add validations to the data being decoded?

When using decode-stream with aleph it could be useful to be able to disconnect the client when it sends a field that we are not expecting.

For example, a protocol might define an SFD (start frame delimiter) field and expect it to be a 0x2 uchar. If the client sends something other than 0x2, we may want to disconnect it instead of trying to decode the rest of the stream (as it is probably garbage), or we may choose to drop data until we find an SFD.
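One pattern that might cover this today: validate in a post-decoder and throw, letting the server's error handling tear down the connection. A sketch, with the frame layout invented for illustration:

```clojure
(require '[gloss.core :refer [compile-frame ordered-map]])

(def framed
  (compile-frame (ordered-map :sfd :ubyte :payload :uint16)
    identity
    ;; post-decode: reject frames that don't start with SFD 0x2
    (fn [{:keys [sfd] :as frame}]
      (when (not= 0x2 sfd)
        (throw (ex-info "bad start-frame delimiter" {:sfd sfd})))
      frame)))
```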

What do you think of these use cases?

Seeming bug when reading from buffer not starting at 0

I'm using gloss from 5cc3fb6

I have this code:

(def edn-codec (gloss/compile-frame
                (gloss/finite-frame :int32
                                    (gloss/string :utf-8))
                pr-str
                #(do (println "Got this:" %) (edn/read-string %))))

(defn encode [data]
  (glio/encode edn-codec data))

(defn decode [buffer-seq]
  (glio/decode edn-codec buffer-seq false))

(def testb (doto (ByteBuffer/allocate 40)
             (.putInt (int 5))
             (.put (byte 34))
             (.put (byte 97))
             (.put (byte 98))
             (.put (byte 99))
             (.put (byte 34))
             (.flip)))

(println "result:" (decode (list testb)))

And it works as expected, it prints:

Got this: "abc"
result: abc

So far, so good. But if I try the following buffer instead

(def testb (doto (ByteBuffer/allocate 40)
             (.put (byte 97))
             (.put (byte 97))
             (.put (byte 97))
             (.put (byte 97))
             (.putInt (int 5))
             (.put (byte 34))
             (.put (byte 97))
             (.put (byte 98))
             (.put (byte 99))
             (.put (byte 34))
             (.flip)
             (.position 4)))

it doesn't work as I expect: I'd expect the same result as with the first buffer, but instead this is printed:

Got this: \0\0\0�"abc"
result: \0\0\0�

So it almost works. If I increase the (int 5) to (int 6), it says "Insufficient bytes to decode frame.", so it does read the 5 and use it as the length, but it also includes the bytes before the buffer's position in the result, before calling the post-decode function. Is that really how it's supposed to work?

Question: Handling prefixes

Hi,

I am trying to decode a frame similar to this one:

{
  :stuff1 :uint16
  :stuff2 :uint16
  :numImages :uint16
  :stuff3 :uint16
  :stuff4 :uint32
  :images (repeated (ordered-map
    :hash :uint32
     :offset :uint32
) :prefix ???)
}

My problem is that the :images frame should repeat :numImages times, but I cannot use repeated's :prefix because :numImages does not come directly before :images; :stuff3 and :stuff4 must be decoded in between.

What's the recommended way to handle this with Gloss?
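One approach that might work (an untested sketch): make everything up to :stuff4 the header frame, derive the repeat count from it, and merge the header fields back into the decoded value:

```clojure
(require '[gloss.core :refer [compile-frame header ordered-map]])

(def image-table
  (header
    (ordered-map :stuff1 :uint16 :stuff2 :uint16 :numImages :uint16
                 :stuff3 :uint16 :stuff4 :uint32)
    ;; header->body: repeat the image frame :numImages times
    (fn [h]
      (compile-frame
        {:images (vec (repeat (:numImages h)
                              (ordered-map :hash :uint32 :offset :uint32)))}
        identity
        #(merge h %)))
    ;; body->header: recover the fixed fields on encode
    (fn [v]
      (assoc (select-keys v [:stuff1 :stuff2 :stuff3 :stuff4])
             :numImages (count (:images v))))))
```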

Thank you!

repeated :encoding-delimiter

Hi Zach,

I'm using Gloss to decode different types of repeated headers, often following the Internet Message (RFC-5322) style.

Given the simple case of a repeated header in the style "key:value\r\n" where the total section is delimited by "\r\n\r\n", I can decode fine with a codec like:

(defcodec basic-header
  [(string :utf-8 :delimiters [":"]) 
   (string :utf-8 :delimiters ["\r\n" "\r\n\r\n"])])

(defcodec basic-headers
  (repeated basic-header :delimiters ["\r\n\r\n"]
                         :strip-delimiters? false))

i.e. leave the repeated-section delimiter to be consumed by the inner codec.

Decoding is fine; the problem is that when encoding, Gloss will emit a delimiter for each inner header, "\r\n", and then the delimiter for the repeated section, "\r\n\r\n", leaving me with one too many "\r\n\r\n".

While I'm not actually encoding headers, I thought it might be a nice addition to be able to specify an encoding delimiter for a repeated section:

(defcodec basic-headers
  (repeated basic-header :delimiters ["\r\n\r\n"]
                         :encoding-delimiter "\r\n"
                         :strip-delimiters? false))

If this is agreeable I'd be happy to supply a pull-request for you.

More information here:
http://derek.troywest.com/articles/by-example-gloss/#basic-limitations

Thanks,
Derek

gloss in calx, buffer data corruption under 1.3+

Hello.
This is more of a calx question, but I suspect gloss may be the issue, so I'm raising it here too.
I've made an attempt to migrate calx to Clojure 1.3 (and also 1.4). While I get no errors at startup or when running kernels, the data in calx buffers is getting corrupted and/or cannot be retrieved correctly...

Zach, a moment of your time to point me where this could be stemming from would be great.

The most concise example of corruption (under Clojure 1.3 or 1.4):

(use 'calx)

(with-cl
  (enqueue-read (wrap [1.0 2.0 3.0] :float32)))
Ref to native buffer: (4.6006E-41 4.6007E-41 5.831E-42)

(with-cl
  (enqueue-read (wrap [1 2 3] :int32)))
Ref to native buffer: (16777216 33554432 50331648)

I've tried various combinations of gloss data types (frames), coercing the types in the wrapped array, and other versions of gloss, all with no luck.

repo here: https://github.com/LudoTheHUN/calx/tree/calx_on_cloj1.4
The commit with my edits is here (I may be doing the Potemkin thing completely wrong):
LudoTheHUN/calx@082d629

Thanks in advance.

convert-sequence does not accept java byte arrays

I'm using gloss to interact with an existing binary protocol that requires a lot of hashing. I find myself constantly wrapping my data, Java byte arrays ("[B"), in sequences. Is this intended?

Example:

(defcodec something (ordered-map
                     ...
                     :attr1 (repeat 32 :ubyte)
                     ...))
(def value1 (byte-array (repeat 32 (byte 0))))
(encode something {...
                   :attr1 value1 ;Exception in convert-sequence
                   ...})

How to encode constants

I would like to prefix all encoded frames with a constant string of two bytes that should be discarded on decode. I have determined that (gloss/compile-frame ["AB" :byte]) does the opposite of what I want: it does not encode the prefix "AB", but injects it into the decoded value. How can I encode such a constant? It seems clunky to provide pre- and post-decoders to accomplish the task.
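For what it's worth, the pre-/post-decoder version is short once spelled out. A sketch (it still round-trips the constant through the frame rather than truly omitting it):

```clojure
(require '[gloss.core :refer [compile-frame string]])

(def prefixed-byte
  (compile-frame [(string :ascii :length 2) :byte]
    ;; pre-encode: inject the constant prefix
    (fn [b] ["AB" b])
    ;; post-decode: verify the constant, then drop it
    (fn [[prefix b]]
      (assert (= "AB" prefix) "bad constant prefix")
      b)))
```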

Vinyasa is incompatible with gloss

The following project.clj results in the error message

java.lang.IllegalStateException: compile-if already refers to: #'potemkin.collections/compile-if in namespace: potemkin.utils

when I try (require 'gloss.core) in the repl.

(defproject tst2 "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [cider/cider-nrepl "0.8.2" :exclusions [org.clojure/tools.namespace]]
                 [potemkin "0.3.11"]
                 [leiningen #=(leiningen.core.main/leiningen-version)
                  :exclusions [org.codehaus.plexus/plexus-utils
                               cheshire
                               com.fasterxml.jackson.core/jackson-core
                               com.fasterxml.jackson.dataformat/jackson-dataformat-smile
                               org.apache.maven.wagon/wagon-provider-api
                               ]]
                 [im.chit/vinyasa "0.3.0" :exclusions [org.codehaus.plexus/plexus-utils]]
                 [gloss "0.2.4" :exclusions [manifold]]]
  :injections [(require 'gloss.core)]
  :main ^:skip-aot tst2.core
  :target-path "target/%s"
  :profiles {:uberjar {:aot :all}})

lein deps :tree suggests the exclusions; they don't seem to make any difference.

With another project, I'm getting

java.lang.NoClassDefFoundError: Could not initialize class potemkin__init, compiling:(gloss/core.clj:1:1) 

when I try to (require 'gloss.core). I'm hoping whatever's going wrong there will be related to this simple example.

repeated bug

(encode (repeated (string :utf8) :prefix :byte) nil)

throws an exception. This seems wrong: nil should arguably be a valid value for repeated encoding, since encoding zero elements ought to be valid.

Line 128 in codecs might be the culprit. Maybe it could be handled more gracefully earlier.
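Until it's handled in repeated itself, a workaround sketch that coerces nil to an empty sequence in a pre-encoder:

```clojure
(require '[gloss.core :refer [compile-frame repeated string]])

(def strings
  (compile-frame (repeated (string :utf-8) :prefix :byte)
    ;; pre-encode: treat nil as zero elements
    #(or % [])
    identity))

;; (gloss.io/encode strings nil) should then emit a zero-count prefix.
```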

header with optional body

Hi there, this is some learning from applying gloss to the bitcoin/bitmessage protocol. If you think it appropriate, I could paste it up in a wiki page. Is this idiomatic gloss?

The protocol specification for var-int is here. In summary, a var-int is either a :ubyte, or 0xfd followed by a :uint16, 0xfe followed by a :uint32, or 0xff followed by a :uint64. And so my code:

(defcodec var-int-codec 
  (header 
    :ubyte 
    #(case % 
       0xfd (compile-frame :uint16)
       0xfe (compile-frame :uint32)
       0xff (compile-frame :uint64)
            (identity-codec))
    #(cond
       (<  % 0xfd)       %
       (<= % 0xffff)     0xfd
       (<= % 0xffffffff) 0xfe
       :else             0xff)))

Thanks again for the very impressive lib.

[Edited to adopt identity-codec]

Endianness?

Apologies in advance if this isn't the best way to ask this question, but...

I'm playing around[1] with decoding the human genome, and I can't figure out a good way to let Gloss know the endianness of the words in the file. I googled pretty extensively and looked at the Gloss docs and source, but haven't found anything better than what I implemented by brute force. I saw something on the Clojure mailing list indicating you were going to allow endianness to be specified in codecs, but didn't manage to find out how.

If it's implemented already, please close; if not, consider it a feature request.

Thanks in advance,
John / eigenhombre
[1] https://github.com/eigenhombre/jenome/blob/master/src/jenome/core.clj
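For anyone finding this later: other issues in this tracker use -le/-be suffixed primitive keywords (e.g. :float32-le, :uint32-le), which appears to be how per-field endianness is spelled:

```clojure
(require '[gloss.core :refer [defcodec]])

;; little-endian vs. big-endian unsigned 32-bit words
(defcodec word-le :uint32-le)
(defcodec word-be :uint32-be)
```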

Using gloss.io/contiguous with a single non-zero position ByteBuffer loses data.

When using gloss.io/contiguous as a convenience for rolling a sequence of ByteBuffer into a single buffer I've found that:

  • if my sequence contains only a single ByteBuffer
  • that buffer has a non-zero position

then the output loses as many bytes from the end as the position is offset from the start.

e.g.

Two test buffers:

(def buff-a (gi/to-byte-buffer "some text "))
=> (var test/buff-a)
(def buff-b (gi/to-byte-buffer "more, then end!"))
=> (var test/buff-b)
buff-a
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=10 cap=10]>
buff-b
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=15 cap=15]>

Apply contiguous, result is as expected:

(gi/contiguous [buff-a buff-b])
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=25 cap=25]>
(gi/decode (gc/string :utf-8) *1)
=> "some text more, then end!"

Reset the buffers, set position on both, apply contiguous.
Result is as expected:

(def buff-a (gi/to-byte-buffer "some text "))
=> (var test/buff-a)
(def buff-b (gi/to-byte-buffer "more, then end!"))
=> (var test/buff-b)
(.position buff-a 2)
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=2 lim=10 cap=10]>
(.position buff-b 4)
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=4 lim=15 cap=15]>
(gi/contiguous [buff-a buff-b])
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=19 cap=19]>
(gi/decode (gc/string :utf-8) *1)
=> "me text , then end!"

Reset buff-a, set position, apply contiguous.
Result is two bytes short, missing from the end:

(def buff-a (gi/to-byte-buffer "some text"))
=> (var test/buff-a)
(.position buff-a 2)
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=2 lim=9 cap=9]>
(gi/contiguous [buff-a])
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=5 cap=5]>
(gi/decode (gc/string :utf-8) *1)
=> "me te"

As long as there is a second buffer in the sequence, this isn't an issue:

(def buff-a (gi/to-byte-buffer "some text"))
=> (var test/buff-a)
(.position buff-a 2)
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=2 lim=9 cap=9]>
(gi/contiguous [buff-a (gi/to-byte-buffer "")])
=> #<HeapByteBuffer java.nio.HeapByteBuffer[pos=0 lim=7 cap=7]>
(gi/decode (gc/string :utf-8) *1)
=> "me text"

TL;DR: if using gloss.io/contiguous with non-zero-position ByteBuffer sequences, you might lose data.

When I get a moment I'll see if I can figure out why and raise a PR.

Priority of frames

Is it possible to specify the priority of frames?

I have a really horrible format to decode:

Any number of "commands", then ASCII text with no size and no delimiter, then any number of "commands", then ASCII text with no size and no delimiter...

Here's a frame for 1 unit:

(def bridge-frame
  [(repeated command-frame :prefix :none)
   {:type :text :value (string :ascii)}])

"Commands" start with 0x02, then one of the characters PNFTDKGER, then possibly some data and end with 0x03

(def command-frame
  [(enum :byte {:start 2})
   (header commands-type
           {:p p-codec :n n-codec :f f-codec :t t-codec :d d-codec
            :k k-codec :g g-codec :e e-codec :r r-codec}
           :type)
   (enum :byte {:end 3})])

This encodes fine. It also decodes fine provided there is no text string following the series of commands

However, decode fails with a "no read-bytes method" error if there is a string after a command or series of commands.

I have 2 questions:
(a) any ideas why the decode might be failing?
(b) when I come to decode-all, without any length or delimiter on the string, will the command-frame prioritize so as to pick up an 0x02 start byte, or will this be eaten by the string frame?

Robert

New decode-stream race fix fails tests using `with-profile ci`

@bo-tato Ran into test failures after merging your fix.

The tests seem to succeed consistently when I run lein do clean, test at the shell, and fail pretty consistently when I run lein with-profile ci do clean, test, which is what CircleCI does. You can see an example here. If I do lein with-profile +ci do clean, test, which adds the ci profile to the default list instead of replacing it, it also works, at least on my machine.

Intermittent test failures are the worst. Looking at the ci profile in project.clj, it seems the only differences are the addition of malli (which I was playing with), and setting the JVM target class file version to 1.8. Not sure why ci profile overriding fails, but profile addition succeeds...

Anyway, your help would be appreciated. For now, I can't deploy the new version until we straighten it all out.

bytebuffer decoding

Hello Zach,

I have this error occasionally from a server running aleph 0.2.0 (it happens rarely, sometimes once per day or once per week); when it happens, the CPU(s) jump to 100%:

https://gist.github.com/1672548

I cannot point at the root cause; the stacktrace unfortunately isn't much help in pinpointing a piece of my code. I am probably doing something wrong, but I wanted your advice on this.

I haven't been able to try the 0.2.1 aleph snapshot, since the instance where it is running isn't clj-1.3 ready.

Thanks.

Another decode finite-frame error

Gloss 0.2.2:

user> (defcodec testinggg (finite-frame 10 :byte))
#'user/testinggg
user> (gloss.io/decode testinggg (java.nio.ByteBuffer/wrap
(.getBytes "0123456789")))
AssertionError Assert failed: (empty? b*) gloss.data.bytes/wrap-finite-block/fn--9253/fn--9254 (bytes.clj:77)

Documentation link

Hi,

Two small issues:

  1. The current README link to the API docs at cljdocs.org is broken ("Could not find release for org.clj-commons/gloss").

  2. I found version 0.2.6 on cljdoc, but it would be nice to be able to read the latest and greatest if possible.

Thanks

defcodec- does not create private Vars

The Vars created with defcodec- are not private and remain accessible from outside the namespaces they are defined in.

user> (require '[gloss.core :refer :all])
nil
user> (defcodec- foo :byte)
#'user/foo
user> (meta #'foo)
{:line 78, :column 6, :file "*cider-repl localhost*", :name foo, :ns #namespace[user]}

I will create a pull request to fix this issue.

deadlock when requiring gloss.io

'lein run' will deadlock at different times if gloss.io is required. E.g., add the require to the default core.clj:

(ns foo.core
  (:require [gloss.io])
  (:gen-class))

(defn -main
  "I don't do a whole lot ... yet."
  [& args]
  (println "Hello, World!"))

Then 'lein run' will deadlock after printing "Hello, World!".

compile-frame with pre-decoder method

Hi Zach,

Currently compile-frame supports pre-encoder and post-decoder functions to manipulate the data structure before/after use with a codec.

I'm currently using a third scenario: a pre-decoder function which manipulates the ByteBuffer being passed to a codec before decoding.

This is useful to me, because the RFC-5322 spec supports un-folding of headers before parsing, and I do that pre-decode.

A different solution would be a look-ahead style String codec which interpreted the byte sequences "\r\n " and "\r\n\t" as simple space characters, but that seems more complicated, particularly as those byte sequences are supersets of the "\r\n" delimiter.

More information here:
http://derek.troywest.com/articles/by-example-gloss/#gloss-extension

I could just parse my ByteBuffers prior to using the codec, but it seemed useful to append this transform method to the codec itself.

Again, I'm happy to supply a pull-request if you think it suitable.

Ta,
Derek

Adding support for direct byte-buffers

I really like the synergy between byte-streams and gloss, so I found it a bit odd that while byte-streams supports converting to a DirectByteBuffer via its options map, there doesn't seem to be a way for gloss to create DirectByteBuffers from its codecs. Perhaps encode could accept an optional options map, similar to byte-streams/convert?
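As an interim sketch (codec and value are placeholders, and the option key is assumed from byte-streams' conversion support mentioned above): encode with gloss, then let byte-streams do the direct conversion:

```clojure
(require '[byte-streams :as bs]
         '[gloss.io :as gio])

;; roll gloss's output into one buffer, then ask byte-streams
;; for a direct ByteBuffer via its options map
(defn encode-direct [codec value]
  (bs/convert (gio/contiguous (gio/encode codec value))
              java.nio.ByteBuffer
              {:direct? true}))
```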
