
mlcommons / tiny


MLPerf™ Tiny is an ML benchmark suite for extremely low-power systems such as microcontrollers

Home Page: https://mlcommons.org/en/groups/inference-tiny/

License: Apache License 2.0

Languages: C 74.95%, C++ 22.48%, Python 1.13%, CSS 0.78%, Assembly 0.55%, HTML 0.07%, Shell 0.04%, Makefile 0.01%

tiny's Introduction

MLPerf™ Tiny Deep Learning Benchmarks for Embedded Devices

The goal of MLPerf Tiny is to provide a representative set of deep neural nets and benchmarking code to compare performance between embedded devices. Embedded devices include microcontrollers, DSPs, and tiny NN accelerators. These devices typically run between 10 MHz and 250 MHz and can perform inference using less than 50 mW of power.

MLPerf Tiny submissions will allow device makers and researchers to choose the best hardware for their use case, and allow hardware and software vendors to showcase their offerings.

The reference benchmarks are provided using TensorFlow Lite for Microcontrollers (TFLM). Submitters can use TFLM directly, although they are encouraged to use the software stack that works best on their hardware.
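The reference models can also be exercised on a host machine before being ported to a device stack. A minimal sketch using the TensorFlow Lite Python interpreter (the model path is a placeholder, not a file name defined by the benchmark):

import numpy as np
import tensorflow as tf

# Placeholder path: point this at any of the reference .tflite models in this repository.
interpreter = tf.lite.Interpreter(model_path="path/to/reference_model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Push a dummy tensor of the right shape and dtype through the model.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))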

For the current version of the benchmark under development, please see the benchmark folder.

The deadline of the next submission round v1.2 is expected to be March 15, 2024, with publication in April (dates not yet finalized).

Results of previous versions are available on the MLCommons web page (switch between versions using the table headers). The code and detailed submissions of previous versions are available in the table below:

Version Code Repository Release Date Results Repository
v0.5 https://github.com/mlcommons/tiny/tree/v0.5 Jun 16, 2021 https://github.com/mlcommons/tiny_results_v0.5
v0.7 https://github.com/mlcommons/tiny/tree/v0.7 April 6, 2022 https://github.com/mlcommons/tiny_results_v0.7
v1.0 https://github.com/mlcommons/tiny/tree/v1.0 Nov 9, 2022 https://github.com/mlcommons/tiny_results_v1.0
v1.1 https://github.com/mlcommons/tiny/tree/v1.1 Jun 27, 2023 https://github.com/mlcommons/tiny_results_v1.1

Please see the MLPerf Tiny Benchmark paper for a detailed description of the motivation and guiding principles behind the benchmark suite. If you use any part of this benchmark (e.g., reference implementations, submissions, etc.) in academic work, please cite the following:

@article{banbury2021mlperf,
  title={MLPerf Tiny Benchmark},
  author={Banbury, Colby and Reddi, Vijay Janapa and Torelli, Peter and Holleman, Jeremy and Jeffries, Nat and Kiraly, Csaba and Montino, Pietro and Kanter, David and Ahmed, Sebastian and Pau, Danilo and others},
  journal={Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
  year={2021}
}

Join the working group here: https://groups.google.com/a/mlcommons.org/g/tiny

tiny's People

Contributors

andreysher, colbybanbury, cskiraly, guschmue, ishotjr, jeremy-syn, jmduarte, maltanar, morphine00, nathanw-mlc, njeffrie, petertorelli, pgmpablo157321, piemonty, profvjreddi, sreckamp, thekanter


tiny's Issues

Sample models

Hi, I was wondering if you could tell me about some of the model architectures you are considering for the benchmark.
If not, could you give me an ETA for when the benchmark will be released?

need clarification on the pre-processing rules in open division

Hi WG,
The pre-processing rules in the open division are somewhat ambiguous.
I'd like to know whether the resized image size is considered part of pre-processing, and whether it can therefore be changed in the open division. For example, in visual wake words the input image is resized to 96x96; can we change it to 128x128 or 48x48 in the open division?
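To make the question concrete, a hedged sketch of the resize step being asked about (using tf.image.resize; 96x96 is the closed-division reference size, and the alternative sizes are the ones in question):

import tensorflow as tf

def preprocess(image, target_size=(96, 96)):
    # The closed division resizes to 96x96; whether target_size may become
    # (128, 128) or (48, 48) in the open division is exactly the question above.
    image = tf.image.resize(image, target_size)
    image = tf.cast(image, tf.float32) / 255.0
    return image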

addQuantize and addDequantize in th_final_initialize of anomaly detection submitter_implemented.cpp

Would it be possible to get an explanation of why these functions are being used in the th_final_initialize function? From what I can see, they only run a check and are never used again. I ask because I am having trouble importing the files they need into my project: adding all the source files required for those functions pushes me over my memory requirements.

submitter_implemented_question

Basically, are these functions/checks essential to the project or would I be able to leave them out?

EEMBC runner input binary files

Hi,
I am struggling to get the input binaries required by the EEMBC energy runner. For the image classification model, the binary files produced after training are stored inside the perf_samples folder, but the binaries the EEMBC runner asks for are not among those 200 bin files for performance evaluation. Can you please provide a link to download the dataset containing the input binary files?

Testing the interface between tiny/v0.1 and eembc

Hi, I want to test how the EEMBC benchmark runner interfaces with tiny v0.1 by deploying one of the reference submissions locally on a desktop and testing it with the EEMBC runner. Is there a reference implementation for that somewhere?

So far I've only found examples that must be compiled with Mbed and deployed to dedicated hardware. I'm also not sure whether these examples are ready to go, or whether they require additional work like creating/compiling a model a certain way before testing.

User guide for submitter/internally implemented was never published

Hi Guys,
Way back last year I wrote a large section in this document that explained the test harness. This was never published, and the comments were removed when the code was rewritten from the EEMBC layout to the new "submitter/internally implemented" structure. I've seen some bugs and questions related to the lack of documentation, and I think it would help to add this as a user guide and update it to match the new code structure Nat implemented.

Peter

Inconsistency with bias enabled in the pytorch model (Conv2d) for image classification

Hello,

I am interested in running the image classification model to benchmark our accelerator. My environment is currently in PyTorch, so I had a look at your experimental model under:
/tiny/benchmark/experimental/training_torch/image_classification/utils/model.py

The model contains a ResNetBlock with two Conv2d convolutions, each followed by batch normalization. However, each Conv2d layer is configured with the bias enabled (bias=True), which is inconsistent with the Conv2d layers in the Keras model, which do not have the use_bias flag enabled (it is also at odds with the purpose of the batch normalization layer that follows).
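For reference, a minimal sketch (not the repository's code) of how the block would look with the bias disabled so that it matches the Keras model:

import torch.nn as nn

class ResNetBlockSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # bias=False matches the Keras layers, where use_bias is not enabled;
        # the following BatchNorm2d provides the affine shift anyway.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection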

Thank you,
Best regards,
Jean-Baptiste

Environment setup for Streaming WakeWord

I'm having issues setting up the environment for running the new streaming wakeword demo; would you mind sending some instructions, @jeremy-syn? Even just the output of pip freeze > requirements.txt would be great.

Also, thanks for all the ongoing work on the benchmark!

Where is the Visual Wake Word test set?

I would like to evaluate the pretrained MobileNet model on the preprocessed COCO2014 test set, but I am not able to find this preprocessed test set anywhere in the repo. Where can I find it? For the other three datasets (AD, IC, KS) it has already been provided in the repo.

I suspect I have to generate it myself using this script with dataType='test2014' set, because this should be the same script that was used to create the training+validation dataset used for training, which can be downloaded here.

Moreover, the paper entitled "MLPerf Tiny Benchmark" mentions this test set for the VWW problem in Section 4.1.

Finally, why is there no test.py (or evaluated.py) script to run the model on the test set, while such scripts exist for the other three datasets (AD, IC, KS)?

Thank you,
Regards,
Luca Urbinati

Loss is negative and accuracy=0.006 when trying to prune anomaly detection

I have used the TensorFlow Model Optimization Toolkit to prune the anomaly_detection benchmark.

The procedure I followed is the same as shown in the link below.
https://www.tensorflow.org/model_optimization/guide/combine/pqat_example

The output during training is like this:
Epoch 2/100
2412/2412 [==============================] - 20s 8ms/step - loss: 11.1539 - val_loss: 11.1307
Epoch 3/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.6982 - val_loss: 10.6691
Epoch 4/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.4117 - val_loss: 10.5804
Epoch 5/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.2858 - val_loss: 10.2876
Epoch 6/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.1822 - val_loss: 10.2884
Epoch 7/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.1250 - val_loss: 10.2690
Epoch 8/100
2412/2412 [==============================] - 20s 8ms/step - loss: 10.0805 - val_loss: 10.3325

Output during pruning is like this:
Epoch 1/100
2680/2680 [==============================] - 47s 16ms/step - loss: -291053.6562 - accuracy: 0.0359
Epoch 2/100
2680/2680 [==============================] - 42s 16ms/step - loss: -287242.3438 - accuracy: 0.0335
Epoch 3/100
2680/2680 [==============================] - 42s 16ms/step - loss: -294022.2500 - accuracy: 0.0341
Epoch 4/100
2680/2680 [==============================] - 41s 15ms/step - loss: -301931.7188 - accuracy: 0.0336
Epoch 5/100
2680/2680 [==============================] - 41s 15ms/step - loss: -311050.6875 - accuracy: 0.0294
Epoch 6/100
2680/2680 [==============================] - 42s 15ms/step - loss: -321004.3750 - accuracy: 0.0241
Epoch 7/100
2680/2680 [==============================] - 41s 15ms/step - loss: -331941.9375 - accuracy: 0.0177
Epoch 8/100
2680/2680 [==============================] - 42s 16ms/step - loss: -343348.4688 - accuracy: 0.0109
Epoch 9/100
2680/2680 [==============================] - 42s 16ms/step - loss: -355611.3438 - accuracy: 0.0080
Epoch 10/100
2680/2680 [==============================] - 42s 16ms/step - loss: -368586.7812 - accuracy: 0.0073
Epoch 11/100
2680/2680 [==============================] - 42s 16ms/step - loss: -382228.8125 - accuracy: 0.0068
Epoch 12/100
2680/2680 [==============================] - 42s 16ms/step - loss: -396485.6875 - accuracy: 0.0066
Epoch 13/100
2680/2680 [==============================] - 42s 16ms/step - loss: -411286.4062 - accuracy: 0.0065
Epoch 14/100
2680/2680 [==============================] - 41s 15ms/step - loss: -426653.8750 - accuracy: 0.0061
Epoch 15/100
2680/2680 [==============================] - 41s 15ms/step - loss: -442495.9062 - accuracy: 0.0056
Epoch 16/100
2680/2680 [==============================] - 41s 15ms/step - loss: -458785.0625 - accuracy: 0.0049

Can you help me with this issue?
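One possible lead, stated as an assumption rather than a confirmed diagnosis: the anomaly detection reference is an autoencoder trained with an MSE loss, so a classification-style loss and accuracy metric (as in the PQAT tutorial) are not meaningful for it, and a mismatched objective can produce negative loss values like those above. A minimal pruning sketch that keeps the MSE objective, where model and train_data are the objects from the reference 00_train.py:

import tensorflow_model_optimization as tfmot

batch_size, epochs = 512, 10
end_step = (len(train_data) // batch_size) * epochs

pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.5,
        begin_step=0, end_step=end_step))

# Keep the autoencoder objective: the target is the input itself and the loss is MSE.
pruned.compile(optimizer="adam", loss="mean_squared_error")
pruned.fit(train_data, train_data, batch_size=batch_size, epochs=epochs,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])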

keyword spotting: linker error

Hello,
I'm trying to deploy on my board; while compiling, I get a linker error at line 102:

static tflite::MicroModelRunner<int8_t, int8_t, 6> model_runner(
    g_kws_model_data, resolver, tensor_arena, kTensorArenaSize);
runner = &model_runner;
}

The undefined symbol is g_kws_model_data; this is in benchmark/reference_submissions/keyword_spotting/submitter_implemented.cpp.

Model from visual wake word does not work for Harvard tinyMLx course arduino code

I am trying to modify the benchmark training code to create a custom model for person detection.

As a sanity check, I was trying to see whether I can port the already generated model to the Arduino code.

I have used the vww_model.cc file and the corresponding settings file to replace the model files in the arduino code developed for the course https://www.edx.org/professional-certificate/harvardx-tiny-machine-learning

I have used the board from here: https://store-usa.arduino.cc/products/arduino-tiny-machine-learning-kit

Arduino code is already available for this course, and it already has a model file, so I replaced that with the model file available in this repo and also changed the settings files.
The output of the code always shows "no person", even when I change the position of the camera.

Can anybody let me know whether this model can be used directly to replace the model files in the other repo?

'image_classification' returned "Too few timestamps found for performance check" by EEMBC Benchmark Runner

Hi,

I compiled image_classification in reference_submission and copied the bin file to NUMAKER_IOT_M487.

Then, I tried to run the EEMBC Benchmark Runner with the ML Performance 1.0.0 script.

I got the message:

ulp-mlperf: Too few timestamps found for performance check.
ulp-mlperf: Cannot compute accuracy metrics in single-run mode.

Is there anything I missed setting up in order to run the test correctly on an Mbed-supported board? Thanks!

Totally off accuracies for anomaly detection with quantized I/O model

Hello,

I am trying to run inference for the Anomaly Detection benchmark against the model with weights, activations, inputs, and outputs quantized. I am getting totally off results for the average AUC.

I changed nothing but the input handling before inference, as the data have to be scaled down and converted to np.int8 (just like in the other benchmarks). Here's the code for that:

import numpy
import tensorflow as tf

def run_inference(model_path, data):
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    input_scale, zero_point = input_details[0]['quantization']
    # Quantize the float input to int8, just like the other benchmarks
    input_data = numpy.array(data/input_scale + zero_point, dtype=numpy.int8)

    output_details = interpreter.get_output_details()
    output_data = numpy.empty_like(data)

    # Run the interpreter sample by sample and collect the outputs
    for i in range(input_data.shape[0]):
        interpreter.set_tensor(input_details[0]['index'], input_data[i:i+1, :])
        interpreter.invoke()
        output_data[i:i+1, :] = interpreter.get_tensor(output_details[0]['index'])

    return output_data

The data parameter comes from the untouched inference code in 03_tflite_test.py and model_path is trained_models/model_ToyCar_quant_fullint_micro_intio.tflite.

The average AUC is 0.5564.

The exact same code (without re-scaling and converting the input) works for the trained_models/model_ToyCar_quant_fullint_micro.tflite model.
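One hedged guess rather than a verified fix: with the intio model the output tensor is also int8, so inside the loop the raw output may need to be dequantized back to float before the reconstruction error is computed, along the lines of:

output_scale, output_zero_point = output_details[0]['quantization']
raw = interpreter.get_tensor(output_details[0]['index'])  # int8 output
output_data[i:i+1, :] = (raw.astype(numpy.float32) - output_zero_point) * output_scale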

I tried to scale the input representative dataset using the following code in the conversion script:

def representative_dataset_gen():
    # Yield every 5th training sample, normalized by its peak absolute value, for calibration
    for sample in train_data[::5]:
        sample = numpy.expand_dims(sample.astype(numpy.float32), axis=0)
        sample = sample / numpy.max(numpy.abs(sample), axis=0)
        yield [sample]

However, this makes the average AUC even worse: 0.4605.

Any hints would be appreciated,
Thanks

Measure performance and accuracy on customized dataset

Hi,
Is there any way to measure performance and accuracy on a custom dataset (other than ad, vww, kws, ic)?
I have tried setting EE_MODEL_VERSION to my own dataset and putting the dataset under the /ulp-mlperf/datasets/ directory, but I got the following error when running medium performance:

Need at least 5 inputs to run benchmark mode
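As a heavily hedged starting point, a sketch of generating input .bin files for a custom dataset; it assumes the runner expects one raw int8 input tensor per file, which should be verified against the layout of the reference perf_samples (all paths and names below are hypothetical):

import numpy as np

# Pre-processed, already int8-quantized inputs for the custom model (hypothetical file).
samples = np.load("my_custom_dataset_int8.npy")
for idx, sample in enumerate(samples[:200]):
    sample.astype(np.int8).tofile(f"ulp-mlperf/datasets/my_dataset/sample_{idx:05d}.bin")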

Anomaly detection - AUC for quantized model with INT8 input/output

Hi,

Regarding the anomaly detection benchmark, what is the expected AUC for the quantized model with INT8 input/output (ad01_int8.tflite or model_ToyCar_quant_fullint_micro_intio.tflite)? I am getting an AUC of 0.354 for machine ID 0, which is very low compared to the AUC for the quantized model with float input/output (model_ToyCar_quant_fullint_micro.tflite).
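For context, a minimal sketch of how an AUC can be derived from the autoencoder's reconstruction error (assuming scikit-learn and per-sample anomaly labels y_true; this only loosely mirrors the reference 03_tflite_test.py):

import numpy as np
from sklearn import metrics

def auc_from_reconstruction(data, reconstructed, y_true):
    # The per-sample mean squared reconstruction error serves as the anomaly score.
    errors = np.mean(np.square(data - reconstructed), axis=1)
    return metrics.roc_auc_score(y_true, errors)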

fatal error: 'model.h' file not found

Hi
I'm trying to build tiny/v0.1 and "make" causes an error:
example_submission/model.cc:15:10: fatal error: 'model.h' file not found

Please find more details below.
Is this error expected, or is it caused by a user error?
Thanks, Thomas

Host info:
Mac OS 10.15.7
g++ --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 12.0.0 (clang-1200.0.32.29)
Target: x86_64-apple-darwin19.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

Detailed trace:
inflating: tensorflow-master/third_party/vulkan_headers/workspace.bzl
inflating: tensorflow-master/third_party/wrapt.BUILD
inflating: tensorflow-master/third_party/zlib.BUILD
creating: tensorflow-master/tools/
inflating: tensorflow-master/tools/tf_env_collect.sh
finishing deferred symbolic links:
tensorflow-master/.pylintrc -> tensorflow/tools/ci_build/pylintrc
mv: rename tensorflow-master/tensorflow to ./tensorflow: Directory not empty
tensorflow/lite/micro/tools/make/Makefile:17: *** "Requires make version 3.82 or later (current is 3.81)". Stop.
tensorflow/lite/micro/tools/make/Makefile:17: *** "Requires make version 3.82 or later (current is 3.81)". Stop.
mv: rename tensorflow/lite/micro/tools/make/gen/*/lib/libtensorflow-microlite.a to ./libtensorflow-microlite.a: No such file or directory
'g++' -DTF_LITE_STATIC_MEMORY -DNDEBUG -O3 -DTF_LITE_DISABLE_X86_NEON -Iexample_submission -Iexample_submission/third_party/gemmlowp -Iexample_submission/tensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Iexample_submission/third_party/ruy -std=c++11 -I. -c example_submission/model.cc -o example_submission/model.o
example_submission/model.cc:15:10: fatal error: 'model.h' file not found
#include "model.h"
^~~~~~~~~
1 error generated.
make: *** [example_submission/model.o] Error 1

Anomaly Detection get_datasets.sh not working

When running get_datasets.sh, the following error message is printed:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   271  100   271    0     0   4672      0 --:--:-- --:--:-- --:--:--  4754
Archive:  dev_data_ToyCar.zip
  End-of-central-directory signature not found.  Either this file is not
  a zipfile, or it constitutes one disk of a multi-part archive.  In the
  latter case the central directory and zipfile comment will be found on
  the last disk(s) of this archive.
unzip:  cannot find zipfile directory in one of dev_data_ToyCar.zip or
        dev_data_ToyCar.zip.zip, and cannot find dev_data_ToyCar.zip.ZIP, period.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   285  100   285    0     0   7500      0 --:--:-- --:--:-- --:--:--  7500
Archive:  dev_data_ToyCar.zip
  End-of-central-directory signature not found.  Either this file is not
  a zipfile, or it constitutes one disk of a multi-part archive.  In the
  latter case the central directory and zipfile comment will be found on
  the last disk(s) of this archive.
unzip:  cannot find zipfile directory in one of dev_data_ToyCar.zip or
        dev_data_ToyCar.zip.zip, and cannot find dev_data_ToyCar.zip.ZIP, period.

Ergo, I am currently unable to run the models. Could you please help? Cheers!

Cannot get anomaly_detection to run, memory issues

Hi all,

I am having problems running the 00_train.py script in the anomaly_detection benchmark. I am unsure what the cause is, as the issue persists regardless of the batch size I use.

I found that by reducing the train_data training input to a length of around ~700000 items (train_data[:700000]) it ran with no problem. The weird thing is that my system, which sadly only has 32GB of RAM, doesn't get close to running out of system memory when I watch free -m. The first epoch trains until it is essentially finished, and then the following error is thrown when moving to the second epoch. I will keep googling, but I am hoping someone here has an idea of where the problem could be coming from.

2022-06-01 14:46:55.918879: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 1806336000 exceeds 10% of free system memory.
2022-06-01 14:46:56.979387: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 1806336000 exceeds 10% of free system memory.
2022-06-01 14:46:57.674953: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 1806336000 exceeds 10% of free system memory.
2022-06-01 14:46:58.187186: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 1806336000 exceeds 10% of free system memory.
Epoch 1/100
1374/1379 [============================>.] - ETA: 0s - loss: 96.14442022-06-01 14:47:24.023059: W tensorflow/core/common_runtime/bfc_allocator.cc:462] Allocator (GPU_0_bfc) ran out of memory trying to allocate 191.41MiB (rounded to 200704000)requested by op _EagerConst
If the cause is memory fragmentation maybe the environment variable 'TF_GPU_ALLOCATOR=cuda_malloc_async' will improve the situation. 
Current allocation summary follows.
Current allocation summary follows.
2022-06-01 14:47:24.023086: I tensorflow/core/common_runtime/bfc_allocator.cc:1010] BFCAllocator dump for GPU_0_bfc
2022-06-01 14:47:24.023100: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (256): 	Total Chunks: 50, Chunks in use: 50. 12.5KiB allocated for chunks. 12.5KiB in use in bin. 544B client-requested in use in bin.
2022-06-01 14:47:24.023110: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (512): 	Total Chunks: 88, Chunks in use: 88. 44.2KiB allocated for chunks. 44.2KiB in use in bin. 44.0KiB client-requested in use in bin.
2022-06-01 14:47:24.023118: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (1024): 	Total Chunks: 1, Chunks in use: 1. 1.2KiB allocated for chunks. 1.2KiB in use in bin. 1.0KiB client-requested in use in bin.
2022-06-01 14:47:24.023126: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (2048): 	Total Chunks: 3, Chunks in use: 3. 7.5KiB allocated for chunks. 7.5KiB in use in bin. 7.5KiB client-requested in use in bin.
2022-06-01 14:47:24.023135: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (4096): 	Total Chunks: 6, Chunks in use: 6. 24.5KiB allocated for chunks. 24.5KiB in use in bin. 24.0KiB client-requested in use in bin.
2022-06-01 14:47:24.023143: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (8192): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023149: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (16384): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023156: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (32768): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023164: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (65536): 	Total Chunks: 18, Chunks in use: 18. 1.24MiB allocated for chunks. 1.24MiB in use in bin. 1.12MiB client-requested in use in bin.
2022-06-01 14:47:24.023171: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (131072): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023179: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (262144): 	Total Chunks: 7, Chunks in use: 6. 2.19MiB allocated for chunks. 1.88MiB in use in bin. 1.88MiB client-requested in use in bin.
2022-06-01 14:47:24.023186: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (524288): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023193: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (1048576): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023200: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (2097152): 	Total Chunks: 1, Chunks in use: 0. 2.44MiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023207: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (4194304): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023214: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (8388608): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023221: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (16777216): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023230: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (33554432): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023237: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (67108864): 	Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023245: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (134217728): 	Total Chunks: 1, Chunks in use: 0. 181.04MiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2022-06-01 14:47:24.023253: I tensorflow/core/common_runtime/bfc_allocator.cc:1017] Bin (268435456): 	Total Chunks: 2, Chunks in use: 2. 3.36GiB allocated for chunks. 3.36GiB in use in bin. 3.36GiB client-requested in use in bin.
2022-06-01 14:47:24.023262: I tensorflow/core/common_runtime/bfc_allocator.cc:1033] Bin for 191.41MiB was 128.00MiB, Chunk State: 
2022-06-01 14:47:24.023275: I tensorflow/core/common_runtime/bfc_allocator.cc:1039]   Size: 181.04MiB | Requested Size: 4B | in_use: 0 | bin_num: 19, prev:   Size: 256B | Requested Size: 8B | in_use: 1 | bin_num: -1
2022-06-01 14:47:24.023280: I tensorflow/core/common_runtime/bfc_allocator.cc:1046] Next region of size 3808755712
2022-06-01 14:47:24.023288: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000000 of size 1280 next 1
2022-06-01 14:47:24.023294: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000500 of size 256 next 2
2022-06-01 14:47:24.023300: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000600 of size 256 next 3
2022-06-01 14:47:24.023306: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000700 of size 256 next 4
2022-06-01 14:47:24.023312: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000800 of size 512 next 5
2022-06-01 14:47:24.023317: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000a00 of size 256 next 8
2022-06-01 14:47:24.023323: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000b00 of size 512 next 9
2022-06-01 14:47:24.023328: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000d00 of size 512 next 10
2022-06-01 14:47:24.023334: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac000f00 of size 512 next 11
2022-06-01 14:47:24.023339: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001100 of size 512 next 12
2022-06-01 14:47:24.023345: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001300 of size 256 next 13
2022-06-01 14:47:24.023351: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001400 of size 256 next 14
2022-06-01 14:47:24.023356: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001500 of size 512 next 15
2022-06-01 14:47:24.023362: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001700 of size 512 next 16
2022-06-01 14:47:24.023367: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001900 of size 512 next 19
2022-06-01 14:47:24.023373: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001b00 of size 512 next 20
2022-06-01 14:47:24.023378: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001d00 of size 512 next 21
2022-06-01 14:47:24.023386: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac001f00 of size 128768 next 17
2022-06-01 14:47:24.023392: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac021600 of size 65536 next 18
2022-06-01 14:47:24.023397: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac031600 of size 512 next 22
2022-06-01 14:47:24.023403: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac031800 of size 512 next 25
2022-06-01 14:47:24.023409: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac031a00 of size 512 next 26
2022-06-01 14:47:24.023415: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac031c00 of size 512 next 27
2022-06-01 14:47:24.023420: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac031e00 of size 512 next 28
2022-06-01 14:47:24.023426: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032000 of size 512 next 31
2022-06-01 14:47:24.023431: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032200 of size 512 next 32
2022-06-01 14:47:24.023437: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032400 of size 512 next 33
2022-06-01 14:47:24.023442: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032600 of size 512 next 34
2022-06-01 14:47:24.023448: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032800 of size 512 next 35
2022-06-01 14:47:24.023453: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032a00 of size 256 next 36
2022-06-01 14:47:24.023459: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032b00 of size 256 next 37
2022-06-01 14:47:24.023464: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032c00 of size 256 next 38
2022-06-01 14:47:24.023470: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032d00 of size 256 next 39
2022-06-01 14:47:24.023475: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032e00 of size 256 next 42
2022-06-01 14:47:24.023481: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac032f00 of size 256 next 43
2022-06-01 14:47:24.023486: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033000 of size 256 next 44
2022-06-01 14:47:24.023492: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033100 of size 512 next 55
2022-06-01 14:47:24.023497: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033300 of size 512 next 56
2022-06-01 14:47:24.023503: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033500 of size 512 next 57
2022-06-01 14:47:24.023508: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033700 of size 512 next 58
2022-06-01 14:47:24.023514: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033900 of size 512 next 59
2022-06-01 14:47:24.023519: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033b00 of size 512 next 60
2022-06-01 14:47:24.023525: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033d00 of size 512 next 61
2022-06-01 14:47:24.023530: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac033f00 of size 512 next 63
2022-06-01 14:47:24.023536: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034100 of size 512 next 64
2022-06-01 14:47:24.023541: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034300 of size 512 next 65
2022-06-01 14:47:24.023547: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034500 of size 512 next 66
2022-06-01 14:47:24.023552: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034700 of size 512 next 67
2022-06-01 14:47:24.023558: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034900 of size 256 next 71
2022-06-01 14:47:24.023563: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034a00 of size 256 next 72
2022-06-01 14:47:24.023569: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034b00 of size 256 next 73
2022-06-01 14:47:24.023574: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034c00 of size 256 next 40
2022-06-01 14:47:24.023580: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac034d00 of size 4096 next 41
2022-06-01 14:47:24.023586: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac035d00 of size 512 next 45
2022-06-01 14:47:24.023592: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac035f00 of size 512 next 48
2022-06-01 14:47:24.023598: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac036100 of size 512 next 49
2022-06-01 14:47:24.023603: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac036300 of size 512 next 50
2022-06-01 14:47:24.023609: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac036500 of size 512 next 51
2022-06-01 14:47:24.023615: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac036700 of size 512 next 53
2022-06-01 14:47:24.023622: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac036900 of size 512 next 54
2022-06-01 14:47:24.023628: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac036b00 of size 768 next 46
2022-06-01 14:47:24.023633: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac036e00 of size 4096 next 47
2022-06-01 14:47:24.023638: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac037e00 of size 2560 next 68
2022-06-01 14:47:24.023642: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac038800 of size 256 next 76
2022-06-01 14:47:24.023647: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac038900 of size 256 next 77
2022-06-01 14:47:24.023652: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac038a00 of size 256 next 78
2022-06-01 14:47:24.023657: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac038b00 of size 256 next 79
2022-06-01 14:47:24.023661: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac038c00 of size 256 next 80
2022-06-01 14:47:24.023666: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac038d00 of size 256 next 81
2022-06-01 14:47:24.023671: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac038e00 of size 256 next 82
2022-06-01 14:47:24.023675: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac038f00 of size 256 next 83
2022-06-01 14:47:24.023680: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac039000 of size 512 next 85
2022-06-01 14:47:24.023685: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac039200 of size 512 next 86
2022-06-01 14:47:24.023690: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac039400 of size 512 next 87
2022-06-01 14:47:24.023694: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac039600 of size 512 next 88
2022-06-01 14:47:24.023697: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac039800 of size 512 next 89
2022-06-01 14:47:24.023701: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac039a00 of size 512 next 90
2022-06-01 14:47:24.023705: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac039c00 of size 512 next 91
2022-06-01 14:47:24.023709: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac039e00 of size 512 next 92
2022-06-01 14:47:24.023712: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03a000 of size 512 next 93
2022-06-01 14:47:24.023716: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03a200 of size 512 next 95
2022-06-01 14:47:24.023720: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03a400 of size 512 next 96
2022-06-01 14:47:24.023724: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03a600 of size 512 next 97
2022-06-01 14:47:24.023728: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03a800 of size 4096 next 98
2022-06-01 14:47:24.023731: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03b800 of size 256 next 99
2022-06-01 14:47:24.023736: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03b900 of size 256 next 100
2022-06-01 14:47:24.023740: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03ba00 of size 256 next 101
2022-06-01 14:47:24.023744: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03bb00 of size 4096 next 102
2022-06-01 14:47:24.023747: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03cb00 of size 512 next 103
2022-06-01 14:47:24.023751: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03cd00 of size 512 next 104
2022-06-01 14:47:24.023755: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03cf00 of size 512 next 105
2022-06-01 14:47:24.023759: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03d100 of size 512 next 107
2022-06-01 14:47:24.023762: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03d300 of size 512 next 108
2022-06-01 14:47:24.023766: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03d500 of size 512 next 109
2022-06-01 14:47:24.023770: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03d700 of size 512 next 111
2022-06-01 14:47:24.023774: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03d900 of size 512 next 112
2022-06-01 14:47:24.023778: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03db00 of size 512 next 113
2022-06-01 14:47:24.023781: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03dd00 of size 512 next 115
2022-06-01 14:47:24.023785: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03df00 of size 512 next 116
2022-06-01 14:47:24.023789: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03e100 of size 512 next 117
2022-06-01 14:47:24.023793: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03e300 of size 2560 next 119
2022-06-01 14:47:24.023797: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03ed00 of size 512 next 121
2022-06-01 14:47:24.023800: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03ef00 of size 512 next 122
2022-06-01 14:47:24.023804: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03f100 of size 512 next 123
2022-06-01 14:47:24.023808: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03f300 of size 512 next 124
2022-06-01 14:47:24.023812: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03f500 of size 512 next 125
2022-06-01 14:47:24.023816: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03f700 of size 512 next 126
2022-06-01 14:47:24.023820: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03f900 of size 512 next 128
2022-06-01 14:47:24.023823: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03fb00 of size 512 next 129
2022-06-01 14:47:24.023827: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03fd00 of size 512 next 130
2022-06-01 14:47:24.023831: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac03ff00 of size 512 next 132
2022-06-01 14:47:24.023835: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac040100 of size 512 next 133
2022-06-01 14:47:24.023838: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac040300 of size 512 next 134
2022-06-01 14:47:24.023842: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac040500 of size 4608 next 23
2022-06-01 14:47:24.023846: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac041700 of size 65536 next 24
2022-06-01 14:47:24.023850: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac051700 of size 65536 next 30
2022-06-01 14:47:24.023854: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac061700 of size 65536 next 29
2022-06-01 14:47:24.023858: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac071700 of size 65536 next 52
2022-06-01 14:47:24.023862: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac081700 of size 127232 next 6
2022-06-01 14:47:24.023866: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac0a0800 of size 327680 next 7
2022-06-01 14:47:24.023870: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac0f0800 of size 65536 next 62
2022-06-01 14:47:24.023874: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac100800 of size 327680 next 84
2022-06-01 14:47:24.023878: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac150800 of size 65536 next 94
2022-06-01 14:47:24.023881: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac160800 of size 65536 next 106
2022-06-01 14:47:24.023885: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac170800 of size 65536 next 110
2022-06-01 14:47:24.023889: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac180800 of size 65536 next 114
2022-06-01 14:47:24.023893: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac190800 of size 65536 next 70
2022-06-01 14:47:24.023897: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac1a0800 of size 327680 next 69
2022-06-01 14:47:24.023902: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1eac1f0800 of size 1806336000 next 74
2022-06-01 14:47:24.023906: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f17c98800 of size 1806336000 next 75
2022-06-01 14:47:24.023910: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83740800 of size 327680 next 118
2022-06-01 14:47:24.023913: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83790800 of size 327680 next 120
2022-06-01 14:47:24.023917: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f837e0800 of size 65536 next 127
2022-06-01 14:47:24.023921: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f837f0800 of size 65536 next 131
2022-06-01 14:47:24.023925: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83800800 of size 256 next 135
2022-06-01 14:47:24.023929: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83800900 of size 256 next 136
2022-06-01 14:47:24.023932: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83800a00 of size 256 next 137
2022-06-01 14:47:24.023936: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83800b00 of size 4096 next 138
2022-06-01 14:47:24.023940: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83801b00 of size 512 next 139
2022-06-01 14:47:24.023944: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83801d00 of size 512 next 140
2022-06-01 14:47:24.023948: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83801f00 of size 512 next 141
2022-06-01 14:47:24.023951: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83802100 of size 65536 next 142
2022-06-01 14:47:24.023955: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83812100 of size 512 next 143
2022-06-01 14:47:24.023959: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83812300 of size 512 next 144
2022-06-01 14:47:24.023963: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83812500 of size 512 next 145
2022-06-01 14:47:24.023967: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83812700 of size 65536 next 146
2022-06-01 14:47:24.023971: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83822700 of size 512 next 147
2022-06-01 14:47:24.023974: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83822900 of size 512 next 148
2022-06-01 14:47:24.023979: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83822b00 of size 512 next 149
2022-06-01 14:47:24.023982: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83822d00 of size 65536 next 150
2022-06-01 14:47:24.023986: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83832d00 of size 512 next 151
2022-06-01 14:47:24.023990: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83832f00 of size 512 next 152
2022-06-01 14:47:24.023994: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83833100 of size 512 next 153
2022-06-01 14:47:24.023998: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83833300 of size 327680 next 154
2022-06-01 14:47:24.024001: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83883300 of size 2560 next 155
2022-06-01 14:47:24.024005: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83883d00 of size 256 next 156
2022-06-01 14:47:24.024009: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83883e00 of size 256 next 157
2022-06-01 14:47:24.024013: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83883f00 of size 256 next 158
2022-06-01 14:47:24.024017: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884000 of size 256 next 159
2022-06-01 14:47:24.024020: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884100 of size 256 next 160
2022-06-01 14:47:24.024024: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884200 of size 256 next 161
2022-06-01 14:47:24.024028: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884300 of size 256 next 162
2022-06-01 14:47:24.024032: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884400 of size 256 next 163
2022-06-01 14:47:24.024035: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884500 of size 256 next 164
2022-06-01 14:47:24.024039: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884600 of size 256 next 165
2022-06-01 14:47:24.024043: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884700 of size 256 next 166
2022-06-01 14:47:24.024047: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884800 of size 256 next 167
2022-06-01 14:47:24.024051: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884900 of size 256 next 168
2022-06-01 14:47:24.024054: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884a00 of size 256 next 169
2022-06-01 14:47:24.024058: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884b00 of size 256 next 170
2022-06-01 14:47:24.024062: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884c00 of size 256 next 171
2022-06-01 14:47:24.024066: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83884d00 of size 256 next 172
2022-06-01 14:47:24.024069: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Free  at 7f1f83884e00 of size 327680 next 232
2022-06-01 14:47:24.024073: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f838d4e00 of size 256 next 206
2022-06-01 14:47:24.024077: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Free  at 7f1f838d4f00 of size 2556416 next 178
2022-06-01 14:47:24.024081: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] InUse at 7f1f83b45100 of size 256 next 179
2022-06-01 14:47:24.024085: I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Free  at 7f1f83b45200 of size 189836800 next 18446744073709551615
2022-06-01 14:47:24.024088: I tensorflow/core/common_runtime/bfc_allocator.cc:1071]      Summary of in-use Chunks by size: 
2022-06-01 14:47:24.024094: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 50 Chunks of size 256 totalling 12.5KiB
2022-06-01 14:47:24.024099: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 87 Chunks of size 512 totalling 43.5KiB
2022-06-01 14:47:24.024104: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 768 totalling 768B
2022-06-01 14:47:24.024108: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 1280 totalling 1.2KiB
2022-06-01 14:47:24.024113: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 3 Chunks of size 2560 totalling 7.5KiB
2022-06-01 14:47:24.024117: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 5 Chunks of size 4096 totalling 20.0KiB
2022-06-01 14:47:24.024121: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 4608 totalling 4.5KiB
2022-06-01 14:47:24.024126: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 16 Chunks of size 65536 totalling 1.00MiB
2022-06-01 14:47:24.024130: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 127232 totalling 124.2KiB
2022-06-01 14:47:24.024135: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 1 Chunks of size 128768 totalling 125.8KiB
2022-06-01 14:47:24.024139: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 6 Chunks of size 327680 totalling 1.88MiB
2022-06-01 14:47:24.024144: I tensorflow/core/common_runtime/bfc_allocator.cc:1074] 2 Chunks of size 1806336000 totalling 3.36GiB
2022-06-01 14:47:24.024148: I tensorflow/core/common_runtime/bfc_allocator.cc:1078] Sum Total of in-use chunks: 3.37GiB
2022-06-01 14:47:24.024152: I tensorflow/core/common_runtime/bfc_allocator.cc:1080] total_region_allocated_bytes_: 3808755712 memory_limit_: 3808755712 available bytes: 0 curr_region_allocation_bytes_: 7617511424
2022-06-01 14:47:24.024159: I tensorflow/core/common_runtime/bfc_allocator.cc:1086] Stats: 
Limit:                      3808755712
InUse:                      3616034816
MaxInUse:                   3626576640
NumAllocs:                      368395
MaxAllocSize:               1806336000
Reserved:                            0
PeakReserved:                        0
LargestFreeBlock:                    0

2022-06-01 14:47:24.024167: W tensorflow/core/common_runtime/bfc_allocator.cc:474] ************************************************************************************************____
Traceback (most recent call last):
  File "/home/alxhoff/git/GitHub/tiny/benchmark/training/anomaly_detection/00_train.py", line 208, in <module>
    history = model.fit(train_data[:len(train_data)],
  File "/home/alxhoff/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/alxhoff/.local/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 106, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
tensorflow.python.framework.errors_impl.InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run _EagerConst: Dst tensor is not ini

I should mention that, as the error message suggests, setting TF_GPU_ALLOCATOR=cuda_malloc_async did not solve the issue.
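A possible workaround, offered as an assumption based on the log rather than a verified fix: feeding the data through tf.data instead of passing the full array to model.fit lets batches be transferred to the GPU one at a time instead of materializing the whole ~1.8 GB tensor there at once:

import tensorflow as tf

# train_data as loaded by 00_train.py; the autoencoder target is the input itself.
batch_size = 512
dataset = (tf.data.Dataset.from_tensor_slices((train_data, train_data))
           .shuffle(buffer_size=4096)
           .batch(batch_size)
           .prefetch(tf.data.AUTOTUNE))

history = model.fit(dataset, epochs=100)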

Cheers,

Alex

Empty keras_model.py file in VWW

Can we load the VWW model from keras_model.py (copied from here) instead of from a library?

This would add a bit of transparency and would match the other benchmarks.
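To make the suggestion concrete, a hedged sketch of the kind of building block such a keras_model.py would contain (a MobileNetV1-style depthwise-separable unit; the layer sizes are illustrative, not the reference's exact configuration):

from tensorflow.keras import layers

def depthwise_separable_block(x, pointwise_filters, strides=1):
    # Depthwise 3x3 followed by pointwise 1x1, each with batch norm and ReLU,
    # as in MobileNetV1.
    x = layers.DepthwiseConv2D(3, strides=strides, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(pointwise_filters, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)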

Reference keyword recognition. Compilation error in ws_bootstrap_ffn.c

Overview

I tried to reproduce the keyword recognition benchmark using the same hardware (Arduino Uno rev3, STM32-LPM01-XN_PowerShield, STM32-Nucleo-l4r5zi). I followed this guide up until the compilation of the exported model. The compilation failed with an error related to the usage of tr_info in ws_bootstrap_ffn.c.

Environment

Windows 10 21H2
WSL 2 Ubuntu 22.04
python 3.8.x

Conda env

name: mbed
channels:
  - conda-forge
  - defaults
dependencies:
  - _libgcc_mutex=0.1=conda_forge
  - _openmp_mutex=4.5=2_gnu
  - bzip2=1.0.8=h7f98852_4
  - ca-certificates=2022.9.24=ha878542_0
  - ld_impl_linux-64=2.39=hc81fddc_0
  - libffi=3.4.2=h7f98852_5
  - libgcc-ng=12.2.0=h65d4601_19
  - libgomp=12.2.0=h65d4601_19
  - libnsl=2.0.0=h7f98852_0
  - libsqlite=3.39.4=h753d276_0
  - libuuid=2.32.1=h7f98852_1000
  - libzlib=1.2.13=h166bdaf_4
  - ncurses=6.3=h27087fc_1
  - openssl=3.0.7=h166bdaf_0
  - pip=22.3=pyhd8ed1ab_0
  - python=3.8.13=ha86cf86_0_cpython
  - readline=8.1.2=h0f457ee_0
  - setuptools=65.5.0=pyhd8ed1ab_0
  - sqlite=3.39.4=h4ff8645_0
  - tk=8.6.12=h27826a3_0
  - wheel=0.37.1=pyhd8ed1ab_0
  - xz=5.2.6=h166bdaf_0
  - pip:
    - appdirs==1.4.4
    - beautifulsoup4==4.6.3
    - cbor==1.0.0
    - certifi==2022.9.24
    - cffi==1.15.1
    - charset-normalizer==2.1.1
    - click==7.1.2
    - cmsis-pack-manager==0.2.10
    - colorama==0.3.9
    - cryptography==3.4.8
    - fasteners==0.18
    - future==0.18.2
    - idna==2.7
    - intelhex==2.3.0
    - jinja2==3.1.2
    - jsonschema==2.6.0
    - junit-xml==1.8
    - lockfile==0.12.2
    - markupsafe==2.1.1
    - mbed-cli==1.10.5
    - mbed-greentea==1.8.14
    - mbed-host-tests==1.8.14
    - mbed-ls==1.8.14
    - mbed-os-tools==1.8.14
    - milksnake==0.1.5
    - prettytable==2.5.0
    - psutil==5.6.7
    - pycparser==2.21
    - pycryptodome==3.15.0
    - pyelftools==0.28
    - pyopenssl==21.0.0
    - pyserial==3.4
    - pyusb==1.2.1
    - pyyaml==6.0
    - requests==2.28.1
    - six==1.12.0
    - soupsieve==2.3.2.post1
    - urllib3==1.26.12
    - urllib3-secure-extra==0.1.0
    - wcwidth==0.2.5

Issue

After running setup_example.sh, I continued by compiling the model into the binaries I'd like to upload, entering the following command into WSL:

mbed compile -m NUCLEO_L4R5ZI -t GCC_ARM -v

The compilation succeeded up to around 40%, and then I got the following error:

[mbed] Working path "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting" (library)
[mbed] Program path "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting"
[mbed] Exec "/home/nachtman/miniconda3/envs/mbed/bin/python3.8 -u /mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/make.py -t GCC_ARM -m NUCLEO_L4R5ZI --source . --build ./BUILD/NUCLEO_L4R5ZI/GCC_ARM -v" in "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting"
[Warning] @,: Compiler version mismatch: Have 10.3.1; expected version >= 9.0.0 and < 10.0.0
Building project keyword_spotting (NUCLEO_L4R5ZI, GCC_ARM)
Scan: keyword_spotting
....
....
Compile [ 39.7%]: ws_bootstrap_ffn.c
Compile: arm-none-eabi-gcc -c -std=gnu11 -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -fmessage-length=0 -fno-exceptions -ffunction-sections -fdata-sections -funsigned-char -MMD -fomit-frame-pointer -Os -g -DMBED_TRAP_ERRORS_ENABLED=1 -DMBED_MINIMAL_PRINTF -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16 -mfloat-abi=softfp -DMBED_ROM_START=0x8000000 -DMBED_ROM_SIZE=0x200000 -DMBED_RAM_START=0x20000000 -DMBED_RAM_SIZE=0xa0000 -DTOOLCHAIN_GCC -DTARGET_STM32L4 -DDEVICE_STDIO_MESSAGES=1 -DMBED_TICKLESS -DDEVICE_SERIAL_ASYNCH=1 -DUSE_FULL_LL_DRIVER -DTARGET_LIKE_CORTEX_M4 -DDEVICE_PORTOUT=1 -DDEVICE_LPTICKER=1 -DTOOLCHAIN_GCC_ARM -DDEVICE_I2CSLAVE=1 -DDEVICE_PWMOUT=1 -DTARGET_MCU_STM32L4 -DTARGET_LIKE_MBED -DSTM32L4R5xx -D__CORTEX_M4 -DDEVICE_RESET_REASON=1 -DDEVICE_SERIAL_FC=1 -DDEVICE_CRC=1 -DDEVICE_I2C_ASYNCH=1 -DUSE_HAL_DRIVER -D__CMSIS_RTOS -DARM_MATH_CM4 -DDEVICE_PORTINOUT=1 -DTARGET_MCU_STM32 -DEXTRA_IDLE_STACK_REQUIRED -DDEVICE_INTERRUPTIN=1 -DDEVICE_SPISLAVE=1 -DDEVICE_SLEEP=1 -DTARGET_RTOS_M4_M7 -DTRANSACTION_QUEUE_SIZE_SPI=2 -DTARGET_FF_ARDUINO_UNO -DTARGET_M4 -DMBED_BUILD_TIMESTAMP=1667990396.8247628 -DTARGET_NAME=NUCLEO_L4R5ZI -DTARGET_STM -DDEVICE_RTC=1 -DTARGET_RELEASE -DDEVICE_PORTIN=1 -DDEVICE_SPI_ASYNCH=1 -DTARGET_NUCLEO_L4R5ZI -DDEVICE_ANALOGOUT=1 -DDEVICE_USBDEVICE=1 -DDEVICE_TRNG=1 -DDEVICE_ANALOGIN=1 -DTARGET_CORTEX_M -DTARGET_STM32L4R5ZI -DTARGET_STM32L4R5xI -D__MBED__=1 -D__MBED_CMSIS_RTOS_CM -DDEVICE_USTICKER=1 -D__FPU_PRESENT=1 -DDEVICE_I2C=1 -DDEVICE_WATCHDOG=1 -DDEVICE_SPI=1 -DCOMPONENT_FLASHIAP=1 -DDEVICE_MPU=1 -DTARGET_CORTEX -DDEVICE_SERIAL=1 -DDEVICE_CAN=1 -DDEVICE_FLASH=1 -DTARGET_MCU_STM32L4R5xI @./BUILD/NUCLEO_L4R5ZI/GCC_ARM/.includes_af75f1246208ed7c14125f18fb2517ec.txt -include ./BUILD/NUCLEO_L4R5ZI/GCC_ARM/mbed_config.h -MD -MF BUILD/NUCLEO_L4R5ZI/GCC_ARM/mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.d -o BUILD/NUCLEO_L4R5ZI/GCC_ARM/mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.o ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c
[Error] ws_bootstrap_ffn.c@278,95: expected ')' before 'PRIu64'
[Warning] ws_bootstrap_ffn.c@278,33: format '%s' expects a matching 'char *' argument [-Wformat=]
[Warning] ws_bootstrap_ffn.c@278,33: format '%hu' expects a matching 'int' argument [-Wformat=]
[Warning] ws_bootstrap_ffn.c@278,33: spurious trailing '%' in format [-Wformat=]
[Warning] ws_bootstrap_ffn.c@277,29: unused variable 'ret' [-Wunused-variable]
[Error] ws_bootstrap_ffn.c@311,102: expected ')' before 'PRIu64'
[Warning] ws_bootstrap_ffn.c@311,41: format '%s' expects a matching 'char *' argument [-Wformat=]
[Warning] ws_bootstrap_ffn.c@311,41: format '%ld' expects a matching 'long int' argument [-Wformat=]
[Warning] ws_bootstrap_ffn.c@311,41: format '%lu' expects a matching 'long unsigned int' argument [-Wformat=]
[Warning] ws_bootstrap_ffn.c@311,41: spurious trailing '%' in format [-Wformat=]
[Warning] ws_bootstrap_ffn.c@310,37: unused variable 'ret' [-Wunused-variable]
[DEBUG] Return: 1
[DEBUG] Output: In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
[DEBUG] Output:                  from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c: In function 'ws_bootstrap_ffn_dhcp_info_notify_cb':
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:95: error: expected ')' before 'PRIu64'
[DEBUG] Output:   278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
[DEBUG] Output:       |                                                                                               ^~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
[DEBUG] Output:   278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
[DEBUG] Output:       |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:62: note: format string is defined here
[DEBUG] Output:   278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
[DEBUG] Output:       |                                                             ~^
[DEBUG] Output:       |                                                              |
[DEBUG] Output:       |                                                              char *
[DEBUG] Output: In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
[DEBUG] Output:                  from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: format '%hu' expects a matching 'int' argument [-Wformat=]
[DEBUG] Output:   278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
[DEBUG] Output:       |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: spurious trailing '%' in format [-Wformat=]
[DEBUG] Output:   278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
[DEBUG] Output:       |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:277:29: warning: unused variable 'ret' [-Wunused-variable]
[DEBUG] Output:   277 |                         int ret = ns_time_system_timezone_info_notify(&time_configuration);
[DEBUG] Output:       |                             ^~~
[DEBUG] Output: In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
[DEBUG] Output:                  from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:102: error: expected ')' before 'PRIu64'
[DEBUG] Output:   311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
[DEBUG] Output:       |                                                                                                      ^~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
[DEBUG] Output:   311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
[DEBUG] Output:       |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:56: note: format string is defined here
[DEBUG] Output:   311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
[DEBUG] Output:       |                                                       ~^
[DEBUG] Output:       |                                                        |
[DEBUG] Output:       |                                                        char *
[DEBUG] Output: In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
[DEBUG] Output:                  from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%ld' expects a matching 'long int' argument [-Wformat=]
[DEBUG] Output:   311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
[DEBUG] Output:       |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%lu' expects a matching 'long unsigned int' argument [-Wformat=]
[DEBUG] Output:   311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
[DEBUG] Output:       |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: spurious trailing '%' in format [-Wformat=]
[DEBUG] Output:   311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
[DEBUG] Output:       |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
[DEBUG] Output: ./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
[DEBUG] Output:   129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
[DEBUG] Output:       |                                                                               ^~~~~~~~~~~
[DEBUG] Output: ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:310:37: warning: unused variable 'ret' [-Wunused-variable]
[DEBUG] Output:   310 |                                 int ret = ns_time_system_time_write(network_time);
[DEBUG] Output:       |                                     ^~~
Traceback (most recent call last):
  File "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/toolchains/mbed_toolchain.py", line 558, in compile_queue
    self.compile_output([
  File "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/toolchains/mbed_toolchain.py", line 686, in compile_output
    raise ToolException(stderr)
tools.utils.ToolException: In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
                 from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c: In function 'ws_bootstrap_ffn_dhcp_info_notify_cb':
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:95: error: expected ')' before 'PRIu64'
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                                                                               ^~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:62: note: format string is defined here
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                                             ~^
      |                                                              |
      |                                                              char *
In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
                 from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: format '%hu' expects a matching 'int' argument [-Wformat=]
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: spurious trailing '%' in format [-Wformat=]
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:277:29: warning: unused variable 'ret' [-Wunused-variable]
  277 |                         int ret = ns_time_system_timezone_info_notify(&time_configuration);
      |                             ^~~
In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
                 from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:102: error: expected ')' before 'PRIu64'
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                                                                                      ^~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:56: note: format string is defined here
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                                       ~^
      |                                                        |
      |                                                        char *
In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
                 from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%ld' expects a matching 'long int' argument [-Wformat=]
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%lu' expects a matching 'long unsigned int' argument [-Wformat=]
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: spurious trailing '%' in format [-Wformat=]
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:310:37: warning: unused variable 'ret' [-Wunused-variable]
  310 |                                 int ret = ns_time_system_time_write(network_time);
      |                                     ^~~


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/make.py", line 74, in wrapped_build_project
    bin_file, update_file = build_project(
  File "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/build_api.py", line 618, in build_project
    objects = toolchain.compile_sources(resources, sorted(resources.get_file_paths(FileType.INC_DIR)))
  File "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/toolchains/mbed_toolchain.py", line 420, in compile_sources
    return self._compile_sources(resources, inc_dirs=inc_dirs)
  File "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/toolchains/mbed_toolchain.py", line 497, in _compile_sources
    return self.compile_queue(queue, objects)
  File "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/toolchains/mbed_toolchain.py", line 568, in compile_queue
    raise ToolException(err)
tools.utils.ToolException: In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
                 from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c: In function 'ws_bootstrap_ffn_dhcp_info_notify_cb':
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:95: error: expected ')' before 'PRIu64'
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                                                                               ^~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:62: note: format string is defined here
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                                             ~^
      |                                                              |
      |                                                              char *
In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
                 from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: format '%hu' expects a matching 'int' argument [-Wformat=]
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:278:33: warning: spurious trailing '%' in format [-Wformat=]
  278 |                         tr_info("Network Time configuration %s status:%"PRIu16" time stamp: %"PRIu64" deviation: %"PRId16" Time Zone: %"PRId16, ret == 0 ? "notified" : "notify FAILED", time_configuration.status, time_configuration.timestamp, time_configuration.deviation, time_configuration.timezone);
      |                                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:277:29: warning: unused variable 'ret' [-Wunused-variable]
  277 |                         int ret = ns_time_system_timezone_info_notify(&time_configuration);
      |                             ^~~
In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
                 from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:102: error: expected ')' before 'PRIu64'
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                                                                                      ^~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%s' expects a matching 'char *' argument [-Wformat=]
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:56: note: format string is defined here
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                                       ~^
      |                                                        |
      |                                                        char *
In file included from ./mbed-os/platform/mbed-trace/include/mbed-trace/ns_trace.h:35,
                 from ./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:22:
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%ld' expects a matching 'long int' argument [-Wformat=]
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: format '%lu' expects a matching 'long unsigned int' argument [-Wformat=]
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:311:41: warning: spurious trailing '%' in format [-Wformat=]
  311 |                                 tr_info("Network Time %s: Era:%"PRId32" Offset:%"PRIu32" old time: %"PRIu64" time: %"PRIu64, ret == 0 ? "updated" : "update FAILED", era, offset, current_time, network_time);
      |                                         ^~~~~~~~~~~~~~~~~~~~~~~~
./mbed-os/platform/mbed-trace/include/mbed-trace/mbed_trace.h:129:79: note: in definition of macro 'tr_info'
  129 | #define tr_info(...)            mbed_tracef(TRACE_LEVEL_INFO,    TRACE_GROUP, __VA_ARGS__)   //!< Print info message
      |                                                                               ^~~~~~~~~~~
./mbed-os/connectivity/nanostack/sal-stack-nanostack/source/6LoWPAN/ws/ws_bootstrap_ffn.c:310:37: warning: unused variable 'ret' [-Wunused-variable]
  310 |                                 int ret = ns_time_system_time_write(network_time);
      |                                     ^~~

[mbed] ERROR: "/home/nachtman/miniconda3/envs/mbed/bin/python3.8" returned error.
       Code: 1
       Path: "/mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting"
       Command: "/home/nachtman/miniconda3/envs/mbed/bin/python3.8 -u /mnt/c/Users/nachtman/Repositories/mixed_projects/ml_benchmark/tiny-0.7/benchmark/reference_submissions/keyword_spotting/mbed-os/tools/make.py -t GCC_ARM -m NUCLEO_L4R5ZI --source . --build ./BUILD/NUCLEO_L4R5ZI/GCC_ARM -v"

Changelog

Is there a comprehensive changelog across versions?
Can results achieved with v0.7 be compared with the reference results proposed by Harvard in v0.5?

anomaly in get_dataset.sh for anomaly_detection?

The shell script tiny/v0.1/training/anomaly_detection/get_dataset.sh seems to write data files that are supposed to be used for evaluation into the training data directory.

#!/bin/sh

URL1="https://zenodo.org/record/3678171/files/dev_data_ToyCar.zip?download=1"
ZIPFILE="dev_data_ToyCar.zip"

URL2="https://zenodo.org/record/3727685/files/eval_data_train_ToyCar.zip?download=1"

mkdir -p dev_data

curl $URL1 -o $ZIPFILE || wget $URL1 -O $ZIPFILE
unzip $ZIPFILE -d dev_data
rm $ZIPFILE

curl $URL2 -o $ZIPFILE || wget $URL2 -O $ZIPFILE
unzip $ZIPFILE -d dev_data
rm $ZIPFILE

Should be:

#!/bin/sh

URL1="https://zenodo.org/record/3678171/files/dev_data_ToyCar.zip?download=1"
ZIPFILE="dev_data_ToyCar.zip"

URL2="https://zenodo.org/record/3727685/files/eval_data_train_ToyCar.zip?download=1"

mkdir -p dev_data

curl $URL1 -o $ZIPFILE || wget $URL1 -O $ZIPFILE
unzip $ZIPFILE -d dev_data
rm $ZIPFILE

curl $URL2 -o $ZIPFILE || wget $URL2 -O $ZIPFILE
unzip $ZIPFILE -d eval_data
rm $ZIPFILE

Or is this intentional? Is evaluation not a concern here, so that all of the data is used for training?
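
As a quick sanity check (a hypothetical helper, not part of the repository), the following snippet lists where the downloaded wav files ended up after running either version of the script; with the original script everything lands under dev_data, while with the modified script the second archive is extracted under eval_data instead.

# check_layout.py -- hypothetical helper; directory names follow the scripts above.
from pathlib import Path

for split in ("dev_data", "eval_data"):
    root = Path(split)
    wavs = list(root.rglob("*.wav")) if root.is_dir() else []
    print(f"{split}: {len(wavs)} wav files")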
