intel / tinycbor
License: MIT License
Concise Binary Object Representation (CBOR) Library
---------------------------------------------------

To build TinyCBOR:

    make

If you want to change the compiler or pass extra compiler flags:

    make CC=clang CFLAGS="-m32 -Oz" LDFLAGS="-m32"

Documentation: https://intel.github.io/tinycbor/current/
tinycbor/src/compilersupport_p.h
Line 109 in d072f46
The "x" macro parameter should be enclosed in parentheses for safety:
# define cbor_ntohs(x) (((uint16_t)(x) >> 8) | ((uint16_t)(x) << 8))
CentOS 6
Commands
git clone https://github.com/DaveGamble/cJSON.git
cd cJSON
mkdir build
cd build
cmake ..
make && sudo make install
cd ../..
git clone https://github.com/intel/tinycbor.git
cd tinycbor
make
Error
cc -I./src -std=c99 -Wall -Wextra -c -o tools/json2cbor/json2cbor.o tools/json2cbor/json2cbor.c
tools/json2cbor/json2cbor.c:50: warning: declaration does not declare anything
tools/json2cbor/json2cbor.c: In function ‘parse_meta_data’:
tools/json2cbor/json2cbor.c:209: warning: braces around scalar initializer
tools/json2cbor/json2cbor.c:209: warning: (near initialization for ‘result.t’)
tools/json2cbor/json2cbor.c:209: error: incompatible types when initializing type ‘CborType’ using type ‘void *’
tools/json2cbor/json2cbor.c:209: warning: excess elements in struct initializer
tools/json2cbor/json2cbor.c:209: warning: (near initialization for ‘result’)
tools/json2cbor/json2cbor.c:223: error: ‘struct MetaData’ has no member named ‘simpleType’
tools/json2cbor/json2cbor.c:225: error: ‘struct MetaData’ has no member named ‘v’
tools/json2cbor/json2cbor.c: In function ‘decode_json_with_metadata’:
tools/json2cbor/json2cbor.c:238: error: ‘struct MetaData’ has no member named ‘v’
tools/json2cbor/json2cbor.c:239: error: ‘struct MetaData’ has no member named ‘v’
tools/json2cbor/json2cbor.c:240: error: ‘struct MetaData’ has no member named ‘v’
tools/json2cbor/json2cbor.c:268: error: ‘struct MetaData’ has no member named ‘simpleType’
tools/json2cbor/json2cbor.c:278: error: ‘struct MetaData’ has no member named ‘v’
tools/json2cbor/json2cbor.c:280: error: ‘struct MetaData’ has no member named ‘v’
tools/json2cbor/json2cbor.c:282: error: ‘struct MetaData’ has no member named ‘v’
tools/json2cbor/json2cbor.c:284: error: ‘struct MetaData’ has no member named ‘v’
tools/json2cbor/json2cbor.c:287: error: ‘struct MetaData’ has no member named ‘v’
make: *** [tools/json2cbor/json2cbor.o] Error 1
Hello,
I followed up on the last warning and I'm being told that the behavior is expected because the compiler is "smart" enough to realize that the shift operation would not produce a value that would overflow. Simple repro:
uint64_t v = 123;
int m1 = v >> 31 << 15; //Issues C4244
int m2 = v >> 32 << 15; //doesn't issue C4244 (for any value of "v")
So the warning for the line:
return sign | ((exp + 15) << 10) | mant;
is legit.
It appears the code at examples/simplereader.c is not functional. I checked out the repository (I also tried the 0.4.1 "release", same problem) and followed these steps:
create small bit of CBOR by converting from JSON using cbor-diag; input JSON: test.json.gz, output: test.cbor.gz
make
cd simplereader
and build: cc ../lib/libtinycbor.a simplereader.c -o simplereader
Run simplereader:
$ ./simplereader test.cbor
Map[
CBOR parsing failure at offset 0: unknown error
but cbordump
works just fine:
$ ../bin/cbordump test.cbor
{"D": "D", "T": 92, "Stuff": [{"Start": true, "End": true}], "MoreStuff": [{"Time": "20160304T123456.2Z", "Count": 2, "Type": "Bad Stuff"}], "someOtherHeader": {"aValue": 42, "myItems": ["towel"], "thePhrase": "Don't Panic"}, "yetAnother": "dummy"}
Might be worth fixing or removing that single example so others do not get lured into "examples" as their introduction (like I did).
Thanks for tinycbor, it looks like the lightweight CBOR implementation I need.
In the Makefile, you're using file descriptor 10 to redirect input to .config:
This fails on my PC. I advise using a file descriptor below 10, such as 9, since "Redirections using file descriptors greater than 9 should be used with care, as they may conflict with file descriptors the shell uses internally." (cf. https://www.gnu.org/software/bash/manual/html_node/Redirections.html)
Hello,
I'm building the lib in Visual Studio with the /W4 switch and getting the warnings below. I understand that your code does have checks for data loss during those conversions; however, I have to have the code building cleanly in my environment. I can put together a change to disable the warning in those specific places. Please let me know if that's OK.
X64:
cborparser.c(802): warning C4244: '=': conversion from 'uint64_t' to 'int', possible loss of data
cborparser.c(808): warning C4244: '=': conversion from 'uint64_t' to 'int', possible loss of data
X86:
cbor.h(166): warning C4201: nonstandard extension used: nameless struct/union
cbor.h(364): warning C4244: '=': conversion from 'uint64_t' to 'std::size_t', possible loss of data
cbor.h(419): warning C4244: '=': conversion from 'uint64_t' to 'std::size_t', possible loss of data
cbor.h(431): warning C4244: '=': conversion from 'uint64_t' to 'std::size_t', possible loss of data
Tested with Qt 5.12:
float halfdata = 23.5;
uint8_t buf[16] = {0};
CborEncoder encoder;
cbor_encoder_init(&encoder, &buf, sizeof(buf), 0);
cbor_encode_half_float(&encoder, &halfdata);
Running the demo, the encoded data is F9 00 00.
The cbordump tool from master fails to build.
Error:
cc -o ../bin/cbordump cbordump.o cborparser.o cborerrorstrings.o cborpretty.o
cbordump.o: In function `dumpFile':
tinycbor/tools/cbordump/cbordump.c:81: undefined reference to `cbor_value_to_json_advance'
cborpretty.o: In function `cbor_value_dup_byte_string':
tinycbor/tools/../src/cbor.h:387: undefined reference to `_cbor_value_dup_string'
cborpretty.o: In function `value_to_pretty':
tinycbor/tools/../src/cborpretty.c:341: undefined reference to `__fpclassify'
cborpretty.o: In function `cbor_value_dup_text_string':
tinycbor/tools/../src/cbor.h:381: undefined reference to `_cbor_value_dup_string'
collect2: error: ld returned 1 exit status
make: *** [../bin/cbordump] Error 1
So, the following code returns 0 when value is NULL (OS: Zephyr, HW: nRF52).
int my_func(CborValue *value) {
    if (!cbor_value_is_unsigned_integer(value) ||
        cbor_value_get_uint64(value, &crc)) {
        LOG_ERR("Couldn't parse arg");
        return -EINVAL;
    }
    return 0;
}
This can't be expected behaviour, right?
As of version 0.5.0, cbor.h includes tinycbor-version.h. However, make install does not install the latter file in v0.5.0 and v0.5.1. Example error:
/tinycbor/prefix/include/tinycbor/cbor.h:37:10: fatal error: 'tinycbor-version.h' file not found
#include "tinycbor-version.h"
^~~~~~~~~~~~~~~~~~~~
Proposed Makefile patch:
23c23,24
< TINYCBOR_HEADERS = src/cbor.h src/cborjson.h
---
> TINYCBOR_HEADERS = src/cbor.h src/cborjson.h \
>                    src/tinycbor-version.h
An in-memory buffer bounds violation is possible at memstream close. Consider the following code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <malloc.h>

/*
 * my system already has an open_memstream implementation,
 * so the one provided by tinycbor is renamed here and in open_memstream.c
 * for clarity
 */
extern FILE *cbor_open_memstream(char **bufptr, size_t *lenptr);

/* home-brewed memory violation checker using GCC's deprecated hooks */
enum { Available = 1024, Canary = 3 };
static char malloc_buffer[Available + Canary];
static size_t allocated = 0;

static void *realloc_hook(void *addr, size_t size, const void *caller)
{
    if (((!addr && allocated == 0) || addr == malloc_buffer) &&
        size > 0 && size <= Available)
    {
        allocated = size;
        memcpy(malloc_buffer + allocated, "XYZ", Canary);
        return malloc_buffer;
    }
    return NULL;
}

static void free_hook(void *addr, const void *caller)
{}

static int check_malloc_buffer(const char *pattern, size_t count)
{
    size_t i;
    for (i = 0; i < count; ++i)
        if (*(malloc_buffer + i) != *pattern++)
            return i;
    return -1;
}

int main(void)
{
    char *buffer;
    size_t len;
    int ret = 0, tainted = 0;
    FILE *memstream = cbor_open_memstream(&buffer, &len);
    ret = (memstream == NULL);

    void *(*old_realloc_hook)(void *, size_t, const void *) = __realloc_hook;
    void (*old_free_hook)(void *, const void *) = __free_hook;
    __realloc_hook = realloc_hook;
    __free_hook = free_hook;

    /*
       a buffer of 4 + 4 / 2 + 1 == 7 bytes is allocated here,
       during the fflush call; 4 bytes of 7 are used
    */
    ret = ret || (fwrite("AAAA", 4, 1, memstream) != 1);
    ret = ret || fflush(memstream);

    /*
       3 bytes are appended to the buffer content;
       all 7 bytes of the buffer are used
    */
    ret = ret || (fwrite("BBB", 3, 1, memstream) != 1);
    ret = ret || fflush(memstream);

    /*
       one more byte is appended to the buffer content during the fclose call,
       violating the buffer bounds and rewriting the 1st canary byte
    */
    ret = (memstream == NULL) || fclose(memstream) || ret;

    __realloc_hook = old_realloc_hook;
    __free_hook = old_free_hook;

    if (ret)
        fprintf(stderr, "fops failed\n");
    else if ((tainted = check_malloc_buffer("AAAABBB\0XYZ", 8 + Canary)) != -1)
        fprintf(stderr, "canary tainted @ pos %i\n", tainted);
    return ret || (tainted != -1);
}
producing output:
canary tainted @ pos 8
Solution? Simple:
--- a/src/open_memstream.c
+++ b/src/open_memstream.c
@@ -64,7 +64,7 @@ static RetType write_to_buffer(void *cookie, const char *data, LenType len)
if (unlikely(add_check_overflow(*b->len, len, &newsize)))
return -1;
- if (newsize > b->alloc) {
+ if (newsize >= b->alloc) { // NB! one extra byte is needed to avoid buffer overflow at close_buffer
// make room
size_t newalloc = newsize + newsize / 2 + 1; // give 50% more room
ptr = realloc(ptr, newalloc);
In cborpretty.c, there is an inaccurate floating point comparison.
On line 422:
if (ival == fabs(val))
ival is an int64_t, but fabs returns a double. Testing two floating-point values for equality with the == operator will rarely return true. Instead, take the absolute difference of the two and check that it is smaller than an epsilon.
I think the docs available at https://intel.github.io/tinycbor/current/ need to be resynced from the source. I ran into an issue where the decoding example code is incorrect. The comment responsible for the Doxygen output was fixed in f5a172b, but it seems this has not yet flowed through to the github.io pages.
Currently, tinycbor requires that we set a pre-allocated buffer in advance. It would be nice if there were a way to pre-calculate the required size (perhaps pass a NULL buffer, have the library keep incrementing pointers as it goes, then provide a way to read that total).
Additionally, it would be nice to have a way, AFTER we've finished encoding, to determine the ACTUAL consumed size. This would at least permit a realloc. Pointer subtraction is currently possible, though a function for this would also be nice/future-proof.
Hello, I am evaluating the use of CBOR as a serialization format and am now checking this implementation. I made a fuzz-testing suite from the example in examples/simplereader.c. I have been running it for 10 hours now without any findings. I made a fork and added a fuzzing suite setup I have used before: https://github.com/sijohans/tinycbor/tree/fuzz-test/fuzz-test
I was also looking at fuzzing the function cbor_value_validate. I am not really sure what the intended use case for that function is. This is how my test case looks:
bool fuzz_one_input(const uint8_t *data, size_t size)
{
    CborParser parser;
    CborValue it;
    CborError err = cbor_parser_init(data, size, 0, &parser, &it);
    if (err == CborNoError) {
        err = cbor_value_validate(&it, CborValidateStrictest);
    }
    return (err == CborNoError);
}
When using it like this, the fuzzing tools detect some crashes, for example when feeding this input (hex representation here):
90808080879296808080809a80808064fafafa000700fa000000fa6438300264
Then, running it with valgrind, I can find some more:
$ valgrind ./build_debug/fuzz_validate < output/fuzz_validate/crashes_all/id:000009,sig:06,src:000119+000159,op:splice,rep:8
==6185== Memcheck, a memory error detector
==6185== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==6185== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==6185== Command: ./build_debug/fuzz_validate
==6185==
==6185== Invalid read of size 1
==6185== at 0x10B636: _cbor_value_extract_number (cborparser.c:169)
==6185== by 0x10E9F4: validate_number (cborvalidation.c:305)
==6185== by 0x10FB87: validate_value (cborvalidation.c:564)
==6185== by 0x10F4E9: validate_container (cborvalidation.c:468)
==6185== by 0x10FA0E: validate_value (cborvalidation.c:535)
==6185== by 0x10F4E9: validate_container (cborvalidation.c:468)
==6185== by 0x10FA0E: validate_value (cborvalidation.c:535)
==6185== by 0x10F4E9: validate_container (cborvalidation.c:468)
==6185== by 0x10FA0E: validate_value (cborvalidation.c:535)
==6185== by 0x10F4E9: validate_container (cborvalidation.c:468)
==6185== by 0x10FA0E: validate_value (cborvalidation.c:535)
==6185== by 0x10F4E9: validate_container (cborvalidation.c:468)
==6185== Address 0x4ba2060 is 0 bytes after a block of size 32 alloc'd
==6185== at 0x483777F: malloc (vg_replace_malloc.c:299)
==6185== by 0x110111: afl_read (main_entry.c:33)
==6185== by 0x11019D: main (main_entry.c:63)
==6185==
==6185==
==6185== HEAP SUMMARY:
==6185== in use at exit: 0 bytes in 0 blocks
==6185== total heap usage: 6 allocs, 6 frees, 2,356 bytes allocated
==6185==
==6185== All heap blocks were freed -- no leaks are possible
==6185==
==6185== For counts of detected and suppressed errors, rerun with: -v
==6185== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Is that the intended use case of the cbor_value_validate function? Which other functions would be a good target for fuzz testing?
examples/simplereader.c:74:19: warning: implicit declaration of function 'cbor_value_dup_string' is invalid in C99 [-Wimplicit-function-declaration]
err = cbor_value_dup_string(it, &buf, &n, it);
^
examples/simplereader.c:164:38: warning: passing 'char *' to parameter of type 'const uint8_t *' (aka 'const unsigned char *') converts between pointers to integer types with different sign [-Wpointer-sign]
CborError err = cbor_parser_init(buf, length, 0, &parser, &it);
^~~
examples/../src/cbor.h:223:52: note: passing argument to parameter 'buffer' here
CBOR_API CborError cbor_parser_init(const uint8_t *buffer, size_t size, int flags, CborParser *parser, CborValue *it);
^
examples/simplereader.c:171:24: error: 'const uint8_t *' (aka 'const unsigned char *') and 'char *' are not pointers to compatible types
it.ptr - buf, cbor_error_string(err));
~~~~~~ ^ ~~~
2 warnings and 1 error generated.
Also, it's not very Visual Studio friendly, if at all. I've been able to compile it under Studio with some changes, but then your example is unable to dump data from CBOR.
It starts printing "Map[" and then blows up with an unexpected EOF error.
Some constrained systems may require the use of a custom allocator, like some ports using FreeRTOS. So a way of redefining the allocators for _cbor_value_dup_string could be useful.
cbor_value_enter_container and advance_internal assert-fail when they fail to extract a number from the input data. Returning an error code instead would allow for more graceful error handling when dealing with untrusted data.
cbor_parser_init(buffer, 6, 0, &m_parser, &m_it);
cbor_value_is_text_string(&m_it)
When calling cbor_value_calculate_string_length(&m_it, &buflen), a CborErrorUnexpectedEOF is returned. The tstr length stored in &buflen is invalid. I would not expect an error here.
With cbor_value_get_string_length(&m_it, &buflen), everything works as expected.
cbor_value_calculate_string_length should work for chunked strings and for non-chunked strings as well.

https://tools.ietf.org/html/rfc7049 states that:
Data Stream: A sequence of zero or more data items, not further assembled into a larger containing data item.
But the tinycbor library handles a data stream with one top-level entity only. The following code has been added to the parser test:
namespace QTest {
template<> char *toString<CborType>(const CborType &type)
{
    const char *repr = "other";
    switch (type)
    {
    case CborIntegerType:
        repr = "int";
        break;
    case CborInvalidType:
        repr = "invalid";
        break;
    // not considering other types
    }
    return qstrdup(repr);
}
}

// skip ...

void tst_Parser::twotoplevel() // has already been added to the tst_Parser class definition
{
    CborParser parser;
    CborValue value;
    const char *data = "\x01\x02";
    QCOMPARE(cbor_parser_init(reinterpret_cast<const quint8 *>(data), strlen(data), 0, &parser, &value), CborNoError);
    int parsed;
    QCOMPARE(cbor_value_get_type(&value), CborIntegerType);
    QCOMPARE(cbor_value_get_int(&value, &parsed), CborNoError);
    QCOMPARE(parsed, 1);
    QCOMPARE(cbor_value_advance(&value), CborNoError);
    QCOMPARE(cbor_value_get_type(&value), CborIntegerType); // <---- failed here
    QCOMPARE(cbor_value_get_int(&value, &parsed), CborNoError);
    QCOMPARE(parsed, 2);
}
produces the following failure:
FAIL! : tst_Parser::twotoplevel() Compared values are not the same
Actual (cbor_value_get_type(&value)): invalid
Expected (CborIntegerType) : int
According to RFC 7049, the length field of a definite-length CBOR map denotes the number of entries in the map.
The map's length follows the rules for byte strings (major type 2), except that the length denotes the number of pairs, not the length in bytes that the map takes up.
However, cbor_value_get_map_length(const CborValue *value, size_t *length) outputs the length integer-divided by 2 (see f0791a2/src/cbor.h#L408). Assuming the intention of this function is to return the number of entries, the result is too low by a factor of 2 (modulo rounding).
As a workaround, one can get the correct map size by calling the internal function _cbor_value_extract_int64_helper directly.
It's uncommon to operate on half-precision floating point data at the application level, while such a format is widely used at the transport level to minimize traffic. In such cases one may need an API for encoding/decoding half-precision floating point data AS the more common single or double. Something like this:
CborError cbor_encode_float_as_half_float(CborEncoder *encoder, float value);
CborError cbor_encode_double_as_half_float(CborEncoder *encoder, double value);
CborError cbor_value_get_half_float_as_float(const CborValue *value, float *result);
CborError cbor_value_get_half_float_as_double(const CborValue *value, double *result);
// etc.
There is some related stuff in the private area, and I think I can make it public.
The following commit (d4c9ecb) added the '-e' option to echo in Makefile.configure, which tells echo to interpret backslash escapes. On my environment, this breaks the .config:
make V=1
[...]
if echo -e "extern int open_memstream(); int main() { return open_memstream(); }" |
cc -xc -o /dev/null - ;
then
echo open_memstream-pass := 1 >&9;
fi
:1:1: error: expected identifier or ‘(’ before ‘-’ token
: In function ‘main’:
:1:53: warning: implicit declaration of function ‘open_memstream’ [-Wimplicit-function-declaration]
[...]
I suggest removing "-e".
Currently, tinycbor checks in Makefile.configure for the availability of open_memstream in the toolchain before compiling src/open_memstream.c. However, this file also uses fopencookie, which is not always available, as fopencookie is not part of any standard (it's a GNU function, while open_memstream is POSIX). If GLIBC is defined, the availability of this function should be checked.
Moreover, funopen is tested in Makefile.configure but never used in the Makefile. Indeed, funopen is called depending on APPLE being defined; perhaps this test should be removed from Makefile.configure?
The documentation link in the README:
https://01org.github.io/tinycbor/current/
is not found (and github.com/01org/tinycbor no longer exists).
Both the cbor_assert macro and the standard C assert macro are used in the source code in roughly a 1:1 proportion. Why? Maybe we should replace assert with cbor_assert for consistency, or place a comment explaining this decision?
Hi,
I am using Ubuntu 14.04 with QMake version 3.0, and Qt version 5.2.1 in /usr/lib/x86_64-linux-gnu.
For some reason, the encoder test fails with this:
tst_Encoder::arraysAndMaps() Received signal 11
FAIL! : tst_Encoder::arraysAndMaps() Received a fatal error.
Loc: [Unknown file(0)]
Totals: 69 passed, 1 failed, 0 skipped
********* Finished testing of tst_Encoder *********
Aborted
It happens when addArraysAndMaps calls QTest::newRow("emptymap") << raw("\xa0") << make_map({ })
and the make_map function fails on auto m = Map(list);
What can be a reason for this?
Should I use a specific qt version to avoid this?
Thank you very much.
I propose adding a function which can encode already-encoded CBOR into a CborEncoder.
Let's assume you have a system where chunks of data are encoded as CBOR. If you are going to merge these chunks into a new CBOR map, they have to be decoded and then encoded again into the map. If instead one could just make a map and insert the binary CBOR directly into it, the decoding and re-encoding would be unnecessary.
CborError cbor_encode_cbor(CborEncoder* encoder, uint8_t* cbor, size_t size);
Regards, Michael
I am using the latest version and using "cbor_encode_simple_value()" to encode a uint8_t.
But when I parse it, a blank value is returned with "CborErrorIllegalSimpleType".
Also, if I encode a uint8_t value of more than 20, the type I get at parse time is "CborBooleanType" or "CborNullType".
How do I encode and parse a uint8_t using TinyCbor?
When encoding, you start a container with either:
CBOR_API CborError cbor_encoder_create_array(CborEncoder *encoder, CborEncoder *arrayEncoder, size_t length);
CBOR_API CborError cbor_encoder_create_map(CborEncoder *encoder, CborEncoder *mapEncoder, size_t length);
The 'length' parameter can either be CborIndefiniteLength, or the actual value. When that length parameter is specified, it would be nice to have an assert (either on add, or on close?) that would confirm that the correct amount of items were written. It becomes pretty easy to forget to change the length when doing development, and it would be great to catch this rather than the otherwise unfortunate behaviors (such as failed finds in a map).
I'm using tinycbor in an ARM project that uses -Wall -Werror -Os. Here is a minimal example that shows the warning I'm getting:
$ arm-none-eabi-gcc -Wall -Werror -Os -c cborparser.c
cborparser.c: In function 'iterate_string_chunks':
cborparser.c:563:27: error: 'chunkLen' may be used uninitialized in this function [-Werror=maybe-uninitialized]
*result = func(buffer + total, ptr, chunkLen);
^
cborparser.c:527:23: error: 'total' may be used uninitialized in this function [-Werror=maybe-uninitialized]
*result = func(buffer, ptr, total);
^
cc1: all warnings being treated as errors
I understand this is totally GCC's fault: size_t total and size_t chunkLen are not used if extract_length() returns an error. I could suppress the warning, but I would like to just use tinycbor in the rest of my project without making a special case for it.
I considered just initializing the variables, but in this case I believe it's better to set the length to 0 in extract_length() when there is an error, which shouldn't happen too often and solves my problem. I'll send a patch request in a moment that does that.
-- I am creating a new request as the previous question about 'multiple arrays' was closed.
Thanks for the reply. How about a cbor_encode_merge() function? It is essentially the same thing as you proposed, but more generic: sort of a strcat function, where we can have two arrays managed separately in their own streams that can be merged later.
Let's assume stream 1 is open and stream 2 is closed, I would like to call something like
cbor_encode_merge(stream1, stream2)
which can copy the map from stream2 in to stream1's tail.
I have a few use cases where I have a CBOR stream with rather large byte strings. Having a function to get a pointer and length for these byte strings would help save some memory, as a buffer to temporarily copy the data into is not required. Something like
CborError cbor_value_get_byte_string(CborValue *value, uint8_t **result, size_t *len);
would be perfect. This could return a CborErrorUnknownLength when a chunked string is passed.
My main use case is a COSE library where I want to retrieve the payload without having to copy it to another buffer. Simply accessing it with a pointer is preferred to save memory on a constrained device.
I'm willing to contribute the code myself, but I need a few pointers on how to get at the actual data.
Should test that it compiles with Arduino, as it's a very weird environment (compiles as C++98, but without the C++ headers).
See https://gerrit.iotivity.org/gerrit/gitweb?p=ci-management.git;a=blob;f=packer/scripts/arduino.sh;h=0f23a509b9a33b1b4e502dc31c61cf7bf05a2289;hb=HEAD for some instructions.
Can I have multiple arrays at the same time?
For instance does the following sequence work?
cbor_encoder_create_map(&stream, &map, 2);
// Create first array.
cbor_encode_text_stringz(&map, "array1");
cbor_encoder_create_array(&map, &array1, CborIndefiniteLength);
// Create a second array.
cbor_encode_text_stringz(&map, "array2");
cbor_encoder_create_array(&map, &array2, CborIndefiniteLength);
cbor_encode_int(&array1, data1);
cbor_encode_int(&array2, data2); // this overwrites array1.
Are there other ways to do this? Can you provide an example on how to manage two arrays within one cbor stream?
Thanks for accepting the change on the unnamed union.
There is another issue with tinycbor on buildroot. On some architectures, it seems that libtinycbor.a is created as a directory. You can find more information here: https://patchwork.ozlabs.org/patch/652187/. I have not found the cause yet.
There is just this line which seems a bit strange to me: "test -d
Did you already have this issue with libtinycbor.a created as a directory?
When dumping CBOR, the function utf8EscapedDump can over-read the input buffer for some inputs. To reproduce the issue, find the attached zip file, unzip it, then run
valgrind bin/cbordump repro.cbor
which should produce something like:
==30879== Invalid read of size 1
==30879== at 0x402B20: utf8EscapedDump (in bin/cbordump)
==30879== by 0x4032C4: value_to_pretty (in bin/cbordump)
==30879== by 0x402E58: container_to_pretty (in bin/cbordump)
==30879== by 0x403068: value_to_pretty (in bin/cbordump)
==30879== by 0x403744: cbor_value_to_pretty_advance (in bin/cbordump)
==30879== by 0x400FC9: dumpFile (in /bin/cbordump)
==30879== by 0x40116F: main (in bin/cbordump)
I tested this on Ubuntu 16.04.1 LTS; HEAD was at 863a480dc4e61ce35371c4d0db17be14c9a68125.
I think the reason is that on lines 195, 203 and 211 of src/cborpretty.c a character of input is consumed, but the count of remaining characters (the variable n) is not decremented.
Zephyr is important.
Hi,
Is it possible to parse and emit a CBOR document in parts? The use case here is where I might have a CBOR file on the order of 2 kB in size, but I don't want to allocate a 2 kB buffer to read the whole thing in one go (I've allocated 4 kB for a stack and don't have a lot of heap space available).
The thinking is, when reading, I might allocate a 256 or 512-byte buffer (on the stack), read the first buffer-full worth of data, then call the parser on that. As I get near the end, I watch for CborErrorAdvancePastEOF; when this happens, I obtain a copy of TinyCBOR's read pointer, do a memmove to move the unread data to the start of my buffer, read some more in, then tell TinyCBOR to move back to the start of the buffer to resume reading whatever CBOR value was being pointed to at the time.
Likewise writing: when CborErrorOutOfMemory is encountered, I can do a write of the buffer, then tell TinyCBOR to continue from the start of the buffer and resume writing the output.
Such a feature would really work well with CoAP block-wise transfers, as the data could be effectively "streamed" instead of having to buffer the lot.
I tried looking around for whether there was a flag I could specify on the decoder, but couldn't see any flags in the documentation.
Regards,
Stuart Longland
Consider the following code snippet:
int main(void)
{
    // NB! The 2nd byte in the buffer is a canary
    const uint8_t buffer[] = {0xC0 | 0x1F, 0x80 | 0x3F};
    const uint8_t *begin = buffer;
    // NB! the data size provided to the get_utf8 function is 1 byte
    uint32_t result = get_utf8(&begin, begin + 1);
    printf("buffer ptr: %p, begin ptr: %p, return value: %x\n",
           buffer, begin, result);
    return 0;
}
the actual output is:
buffer ptr: 0x7ffd07e9d700, begin ptr: 0x7ffd07e9d702, return value: 7ff
the expected output is:
buffer ptr: 0x7ffd07e9d700, begin ptr: 0x7ffd07e9d701, return value: ffffffff
Solution:
diff --git a/src/utf8_p.h b/src/utf8_p.h
index 577e540..ca43835 100644
--- a/src/utf8_p.h
+++ b/src/utf8_p.h
@@ -66,7 +66,7 @@ static inline uint32_t get_utf8(const uint8_t **buffer, const uint8_t *end)
return ~0U;
}
- if (n < charsNeeded - 1)
+ if (n < charsNeeded)
return ~0U;
/* first continuation character */
Have a nice day!
Hi,
the code below causes cbor_encoder_create_array to return CborErrorOutOfMemory.
CborEncoder enc, array_enc;
u8_t buf[16];
cbor_encoder_init(&enc, buf, sizeof(buf), 0);
int res = 0;
res = cbor_encoder_create_array(&enc, &array_enc, 1);
How to create a CBOR array? Can you point me to some example. Thank you!
Expected: some function will return an OOM error, extra_bytes_needed == 1
Actual: no errors are reported, extra_bytes_needed == 1
The workaround is to check extra_bytes_needed instead of relying on the error codes.
create_container explicitly ignores OOM errors (https://github.com/01org/tinycbor/blob/9ba4791ad5b85f3229211b1605caa3c868a4d6f9/src/cborencoder.c#L469), so I guess there must be a good reason for it.
I'm attempting to write a parser, and it seems that cbor_value_leave_container is only allowed when you're at the end of the container. I would prefer to be able to leave the container at any time, so that I only need to pick up the necessary data in an array or map.
The IoTivity Arduino build fails against the latest version of tinycbor (4e9626c).
$ extlibs/arduino/arduino-1.5.8/hardware/tools/avr/bin/avr-g++ --version
avr-g++ (GCC) 4.8.1
avr-g++ -o extlibs/tinycbor/tinycbor/src/cborencoder.o -c -c -g -Os -w -fno-exceptions -ffunction-sections -fdata-sections -fno-threadsafe-statics -MMD -Wall -Os -mmcu=atmega2560 -DF_CPU=16000000L -DARDUINO=158 -DARDUINO_AVR_MEGA2560 -std=c99 -DARDUINO_ARCH_AVR -DNDEBUG -DWITH_ARDUINO -D__ARDUINO__ -D__OIC_DEVICE_NAME__='"OIC-DEVICE"' -DNO_EDR_ADAPTER -DNO_LE_ADAPTER -DIP_ADAPTER -DNO_TCP_ADAPTER -DROUTING_EP -DSINGLE_THREAD -Iout/arduino/avr/release/extlibs/cjson -Iextlibs/cjson -Iout/arduino/avr/release/extlibs/timer -Iextlibs/timer -Iout/arduino/avr/release/resource/csdk/logger/include -Iresource/csdk/logger/include -Iout/arduino/avr/release/resource/csdk/ocrandom/include -Iresource/csdk/ocrandom/include -Iout/arduino/avr/release/resource/csdk/stack/include -Iresource/csdk/stack/include -Iout/arduino/avr/release/resource/csdk/stack/include/internal -Iresource/csdk/stack/include/internal -Iout/arduino/avr/release/resource/oc_logger/include -Iresource/oc_logger/include -Iout/arduino/avr/release/resource/csdk/connectivity/lib/libcoap-4.1.1 -Iresource/csdk/connectivity/lib/libcoap-4.1.1 -Iout/arduino/avr/release/resource/csdk/connectivity/inc -Iresource/csdk/connectivity/inc -Iout/arduino/avr/release/resource/csdk/connectivity/api -Iresource/csdk/connectivity/api -Iout/arduino/avr/release/resource/csdk/connectivity/external/inc -Iresource/csdk/connectivity/external/inc -Iout/arduino/avr/release/resource/csdk/security/include -Iresource/csdk/security/include -Iout/arduino/avr/release/resource/csdk/security/include/internal -Iresource/csdk/security/include/internal -Iout/arduino/avr/release/resource/api -Iresource/api -Ideps/arduino/include -Iextlibs/tinycbor/tinycbor/src -Iextlibs/arduino/arduino-1.5.8/hardware/arduino/avr/variants/mega -Iextlibs/arduino/arduino-1.5.8/hardware/arduino/avr/cores/arduino -Iextlibs/arduino/arduino-1.5.8/hardware/arduino/avr/libraries/SPI -Iextlibs/arduino/arduino-1.5.8/libraries/Ethernet/src 
-Iextlibs/arduino/arduino-1.5.8/libraries/Ethernet/src/utility -Iextlibs/arduino/arduino-1.5.8/libraries/Time/Time -Iout/arduino/avr/release/resource/c_common -Iresource/c_common -Iout/arduino/avr/release/resource/c_common/oic_malloc/include -Iresource/c_common/oic_malloc/include -Iout/arduino/avr/release/resource/c_common/oic_string/include -Iresource/c_common/oic_string/include -Iout/arduino/avr/release/resource/inc -Iresource/inc -Iout/arduino/avr/release/resource/lib/libcoap-4.1.1 -Iresource/lib/libcoap-4.1.1 -Iout/arduino/avr/release/resource/common/inc -Iresource/common/inc -Iout/arduino/avr/release/resource/csdk/common/inc -Iresource/csdk/common/inc -Iout/arduino/avr/release/resource/csdk/ip_adapter/arduino -Iresource/csdk/ip_adapter/arduino -Iout/arduino/avr/release/resource/csdk/routing/include -Iresource/csdk/routing/include extlibs/tinycbor/tinycbor/src/cborencoder.c
In file included from extlibs/tinycbor/tinycbor/src/cborencoder.c:28:0:
extlibs/tinycbor/tinycbor/src/cborencoder.c: In function 'CborError create_container(CborEncoder*, CborEncoder*, size_t, uint8_t)':
extlibs/tinycbor/tinycbor/src/compilersupport_p.h:41:62: error: types may not be defined in 'sizeof' expressions
# define cbor_static_assert(x) ((void)sizeof(struct { int m : 2*!!(x) - 1; }))
^
extlibs/tinycbor/tinycbor/src/cborencoder.c:235:5: note: in expansion of macro 'cbor_static_assert'
cbor_static_assert(((MapType << MajorTypeShift) & CborIteratorFlag_ContainerIsMap) == CborIteratorFlag_ContainerIsMap);
^
extlibs/tinycbor/tinycbor/src/compilersupport_p.h:41:62: error: types may not be defined in 'sizeof' expressions
# define cbor_static_assert(x) ((void)sizeof(struct { int m : 2*!!(x) - 1; }))
^
extlibs/tinycbor/tinycbor/src/cborencoder.c:236:5: note: in expansion of macro 'cbor_static_assert'
cbor_static_assert(((ArrayType << MajorTypeShift) & CborIteratorFlag_ContainerIsMap) == 0);
^
scons: *** [extlibs/tinycbor/tinycbor/src/cborencoder.o] Error 1
Please let me know if you need more information for confirming or diagnosing this!
Building tinycbor on my Windows 7 machine using MinGW GCC-6.3.0-1 (g++ compiler) caused a linker error:
lib/tinycbor/libtinycbor.a(cborencoder.c.obj): In function create_container:
cborencoder.c:460: undefined reference to static_assert
cborencoder.c:461: undefined reference to static_assert
collect2.exe: error: ld returned 1 exit status
It seems to me that the #if condition in compilersupport_p.h (line 53) does not work in my case:
#if __STDC_VERSION__ >= 201112L || __cplusplus >= 201103L || __cpp_static_assert >= 200410
# define cbor_static_assert(x) static_assert(x, #x)
#elif !defined(__cplusplus) && defined(__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__ >= 406) && (__STDC_VERSION__ > 199901L)
# define cbor_static_assert(x) _Static_assert(x, #x)
#else
# define cbor_static_assert(x) ((void)sizeof(char[2*!!(x) - 1]))
#endif
The first condition, __STDC_VERSION__ >= 201112L,
is true in my case, but static_assert is still not available — in C11 it is a macro defined in <assert.h>, which is not included here.
Could the first condition be removed?
Alternatively, the RapidJSON library uses a quite similar condition:
https://github.com/Tencent/rapidjson/blob/a7f687fdf812b9f9c7bf7f78eeb1261642f31623/include/rapidjson/rapidjson.h#L419
#if __cplusplus >= 201103L || ( defined(_MSC_VER) && _MSC_VER >= 1800 )
#define RAPIDJSON_STATIC_ASSERT(x) \
static_assert(x, RAPIDJSON_STRINGIFY(x))
#endif // C++11
This might be an alternative.
I noticed the following 2 signatures:
CBOR_API CborError cbor_encode_float(CborEncoder *encoder, const float *value);
CBOR_API CborError cbor_encode_double(CborEncoder *encoder, const double *value);
Why would these take pointers to a float/double rather than the values themselves? This is inconsistent with the signature of cbor_encode_int, which takes its value directly.
It seems that cJSON will soon have its first official release (DaveGamble/cJSON#41).
cJSON now has make and CMake files that install the cJSON.h header into a cjson subdirectory (https://github.com/DaveGamble/cJSON/blob/master/CMakeLists.txt). As soon as that version is released, I would suggest updating the includes in Makefile.configure and json2cbor.c. Do you agree?
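For illustration, the change in json2cbor.c would look roughly like this (assuming the header is currently included without a directory prefix):

```c
/* Hypothetical include update once the new cJSON release installs
   its header under a cjson/ subdirectory: */
#include <cjson/cJSON.h>   /* was: #include <cJSON.h> (assumption) */
```

The corresponding include path in Makefile.configure would need the same adjustment.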
Do you think it's a good time for a new release of tinycbor? I sent a patch to the soletta project to update the CborEncoder structure (solettaproject/soletta#2347), but they would like a tagged release.
A top-level LICENSE file will simplify the contribution process.
The 0.4.0 release has been tagged, but the VERSION file in the root still says 0.3.2; it should be updated to match the tag.