
License: Apache License 2.0


Tensorflow 2 Object Detection Training GUI for Linux

Updated for CUDA 11 and TensorFlow 2!

This repository allows you to get started with training a State-of-the-art Deep Learning model with little to no configuration needed! You provide your labeled dataset and you can start the training right away and monitor it with TensorBoard. You can even test your model with our built-in Inference REST API. Training with TensorFlow has never been so easy.

You can also use our BMW-Labeltool-lite to label your dataset. The images and labels can be used directly for training.

  • This repository is based on the Tensorflow Object Detection API
  • The TensorFlow version used in this repo is 2.5.0
  • All supported networks in this project are taken from the tensorflow model zoo
  • All training runs use pre-trained network weights.
  • The pre-trained weights that you can use out of the box are based on the COCO dataset.
  • The app was tested with Google Chrome; it is recommended to use Chrome when training.
  • This repository supports training on both CPU and multiple GPUs (up to 2 GPUs)

Prerequisites

  • Ubuntu 18.04
  • NVIDIA Drivers (418.x or higher)
  • Docker CE latest stable release
  • NVIDIA Docker 2
  • Docker-Compose

Setting Up Project Requirements Automated

This step is recommended to ensure the solution runs correctly.

The setup script will check and adjust all required components based on your input.

  • Run the following command

    chmod +x setup_solution_parameters.sh && source setup_solution_parameters.sh

    • The script will check whether docker and docker-compose are installed and, if not, install them.

    • You will be prompted to choose the build architecture (GPU/CPU) for the training solution.

      • If you choose GPU, the script will check whether nvidia-docker is installed and, if not, install it.
    • The script will list all available network interfaces so you can select the one to extract the IP address from.

    • You will be prompted to choose whether you want to set up a proxy.

    • You will be prompted to select the pre-trained network weights you want downloaded during the docker image build (use the up/down arrows to navigate, space to select/unselect, enter to submit your selection, and esc to quit).

Setting Up Project Requirements Manually

How to check for prerequisites

To check if you have docker-ce installed:

docker --version

To check if you have docker-compose installed:

docker-compose --version

To check if you have nvidia-docker installed:

dpkg -l | grep nvidia-docker

To check your nvidia drivers version, open your terminal and type the command nvidia-smi

Installing Prerequisites

  • If you have neither docker nor docker-compose installed, use the following command:

    chmod +x install_full.sh && source install_full.sh

  • If you have docker-ce installed and only wish to install docker-compose and perform the necessary operations, use the following command:

    chmod +x install_compose.sh && source install_compose.sh

  • Install NVIDIA Drivers (418.x or higher) and NVIDIA Docker for GPU training by following the official docs

  • Make sure that the .gitkeep files in the datasets, checkpoints, tensorboards, and inference_api/models folders are deleted. (.gitkeep files are placeholder files used for git.)

  • Make sure that the base_dir field in docker_sdk_api/assets/paths.json is correct (it must match the path of the root of the repo on your machine).

  • Make sure that the image_name field in docker_sdk_api/assets/paths.json is correct (it must match your chosen training architecture: tf2_training_api_cpu or tf2_training_api_gpu).
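Taken together, the two fields above might look like this (a hypothetical fragment; the actual paths.json may contain additional fields, and the path shown is a placeholder):

```json
{
  "base_dir": "/home/<user>/BMW-TensorFlow-Training-GUI",
  "image_name": "tf2_training_api_gpu"
}
```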

  • Go to gui/src/environments/environment.ts and gui/src/environments/environment.prod.ts and change the following:

    • the url field: must match the IP address of your machine

    • the IP part of the inferenceAPIUrl field: must match the IP address of your machine (use the ifconfig command to check your IP address; please use your private IP, which starts with 10., 172.16., or 192.168.)

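As a sketch, the relevant fields in environment.ts might end up looking like this (only the two fields discussed above are shown, and the IP and port values are placeholders; the real files contain more settings):

```typescript
export const environment = {
  production: false,
  // replace 192.168.1.10 with your machine's private IP
  url: 'http://192.168.1.10:2222',
  inferenceAPIUrl: 'http://192.168.1.10:4343'
};
```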

If you are behind a proxy:

  • Enter your proxy settings in the <base-dir>/proxy.json file

  • Enter the following command:

      python3 set_proxy_args.py
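The exact schema of proxy.json is defined by the repo; as a purely hypothetical illustration, typical proxy settings look something like:

```json
{
  "http_proxy": "http://<proxy-host>:<proxy-port>",
  "https_proxy": "http://<proxy-host>:<proxy-port>"
}
```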

Dataset Folder Structure

The following is an example of how a dataset should be structured. Please put all your datasets in the datasets folder.

├──datasets/
    ├──sample_dataset/
        ├── images
        │   ├── img_1.jpg
        │   └── img_2.jpg
        ├── labels
        │   ├── json
        │   │   ├── img_1.json
        │   │   └── img_2.json
        │   └── pascal
        │       ├── img_1.xml
        │       └── img_2.xml
        └── objectclasses.json

PS: you don't need to have both json and pascal folders; either one is enough.

  • If you want to label your images, you can use our BMW-LabelTool-Lite which is a free, open-source image annotation tool. This tool supports our JSON label format
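Before starting a job, you can sanity-check a dataset folder against the layout above. This is a minimal sketch (not part of the repo) that assumes exactly the structure described in this README:

```python
import os

def validate_dataset(root):
    """Return a list of problems with a dataset folder laid out as above.

    Assumes the structure described in this README: an images/ folder,
    a labels/json or labels/pascal folder, and an objectclasses.json file.
    """
    problems = []
    if not os.path.isdir(os.path.join(root, "images")):
        problems.append("missing images/ folder")
    has_json = os.path.isdir(os.path.join(root, "labels", "json"))
    has_pascal = os.path.isdir(os.path.join(root, "labels", "pascal"))
    if not (has_json or has_pascal):
        problems.append("need labels/json or labels/pascal")
    if not os.path.isfile(os.path.join(root, "objectclasses.json")):
        problems.append("missing objectclasses.json")
    return problems
```

Run it on each folder under datasets/ before starting a training job; an empty list means the layout matches.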

Objectclasses.json file example

You must include in your dataset an objectclasses.json file with a similar structure to the example below:
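As a hedged illustration only (the field names here are an assumption; consult the sample datasets in the repo for the exact schema), an objectclasses.json maps each class name to a numeric id:

```json
[
  { "Id": 0, "Name": "dog" },
  { "Id": 1, "Name": "cat" }
]
```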

Midweight and Heavyweight Solution

Midweight: download only specific supported pre-trained weights during the docker image build.
To do so, open the json file training_api/assets/networks.json and change the values of the networks you wish to download to true.

Heavyweight (default): download all supported pre-trained weights during the docker image build.
To do so, open the json file training_api/assets/networks.json and change the value of "select_all" to true.

PS: if you don't download a network's weights during the build, you won't be able to use that network during training unless you rebuild the solution with the proper network selected.

All training runs use pre-trained network weights based on the COCO dataset.
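For illustration, a midweight networks.json might look like this (the network key names below are placeholders, not the repo's exact identifiers):

```json
{
  "select_all": false,
  "ssd_mobilenet_v2": true,
  "efficientdet_d0": false
}
```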

Build the Solution

If you wish to deploy the training workflow in GPU mode, please run the following command from the repository's root directory:

docker-compose -f build_gpu.yml build

If you wish to deploy the training workflow in CPU mode, please run the following command from the repository's root directory:

docker-compose -f build_cpu.yml build 

Run the Solution

If you wish to deploy the training workflow in GPU mode, please run the following command:

docker-compose -f run_gpu.yml up

If you wish to deploy the training workflow in CPU mode, please write the following command

docker-compose -f run_cpu.yml up

After a successful run you should see something like the following:

Usage

  • If the app is deployed on your machine: open your web browser and type the following: localhost:4200 or 127.0.0.1:4200

  • If the app is deployed on a different machine: open your web browser and type the following: <machine_ip>:4200

1- Preparing Dataset

Prepare your dataset for training


2- Specifying General Settings

Specify the general parameters for your docker container


3- Specifying Hyperparameters

Specify the hyperparameters for the training job


4- Specifying advanced hyperparameters

Specify the advanced hyperparameters for the training job


5- Checking training logs

Check your training logs to get better insights on the progress of the training


6- Monitoring the training

Monitor the training using Tensorboard


7- Checking the status of the job

Check the status to know when the job is completed successfully


8- Downloading and testing with Swagger

Download your model and easily test it with the built-in inference API using Swagger


9- Stopping and deleting the model's container

Delete the container's job to stop an ongoing job or to remove the container of a finished job. (Finished jobs are always available to download)


10- Visualizing graphs and metrics of Deleted Jobs

Visualize graphs and metrics of Deleted Jobs with Tensorboard


Training and Tensorboard Tips

Check our tips document for (1) better insight on training models based on our expertise and (2) a benchmark of the inference speed.

Our TensorBoard document helps you find your way more easily while navigating TensorBoard.

Guidelines

  • In advanced configuration mode, be careful while making changes, because mistakes can cause errors during training. If that happens, stop the job and try again.

    • the paths in fine_tune_checkpoint should be entered manually when you choose to train from a checkpoint

    • Scroll down to find fine_tune_checkpoint and replace with your network-name and checkpoint name:

      • fine_tune_checkpoint :

        fine_tune_checkpoint: "/checkpoints/<network-name>/<name-of-the-checkpoint>/ckpt-0"

  • In general settings, choose the container name carefully, because choosing a name already used by another container will cause errors.

  • When you try to monitor the job using TensorBoard, the page may not open right away; wait a few seconds and refresh the page.

  • If you leave TensorBoard open for a long time, it might freeze. If you encounter this, simply closing the TensorBoard tab in the browser and reopening it will solve the problem.



Change Docker-sdk default port

To change the docker-sdk default port 2222 to any other port of your choice:

  • change the uvicorn port inside docker_sdk_api/docker/Dockerfile to the port of your choice

  • rebuild the docker-sdk image using the following command in the root of the repo:

    • docker-compose -f build.yml build docker_sdk
  • change baseEndPoint to <port-of-your-choice> inside gui/src/environments/environment.ts and gui/src/environments/environment.prod.ts

  • rebuild the GUI image using the following command in the root of the repo:

    • docker-compose -f build.yml build user_interface
  • after this, you can run the solution as follows: docker-compose -f run.yml up



Known Issues

You might face some errors in some cases during the training. Most common ones are:

  • The running container has no RepoTag please kill to proceed: Container ID: <id-container>: this issue is caused by a container not having a name. In that case, rename that container or kill it (make sure it is safe to remove this container) via docker kill <id-container>.
  • Job Not Started: 404 Client Error Not Found ("pull access denied for <image-name>, repository does not exist or may require 'docker login' ...): this issue occurs when you try to run a training docker image that you don't have. The main reason is not properly building the training_api or not setting up the project requirements; please refer to the Setting Up Project Requirements section in the documentation.
  • Dataset Not Valid: this error means that your dataset structure is not valid or the image/label formats are not supported.
  • Training job not started after the general settings step: one of the main reasons is that the base_dir field in docker_sdk_api/assets/paths.json is not adjusted. You can solve this issue by running ./setup_solution_parameters.sh and choosing the training version (GPU/CPU) you want to use.

Citing

If you use this repository in your research, consider citing it using the following Bibtex entry:

@misc{bmwtrainingtool,
  author = {BMW TechOffice MUNICH},
  title = {TensorFlow Training GUI},
  howpublished = {\url{https://github.com/BMW-InnovationLab/BMW-TensorFlow-Training-GUI}},
  year = {2022},
}

Acknowledgments

  • Hadi Koubeissy, inmind.ai, Beirut, Lebanon

  • Ismail Shehab, inmind.ai, Beirut, Lebanon

  • Joe Sleiman, inmind.ai, Beirut, Lebanon

  • Jimmy Tekli, BMW Innovation Lab, Munich, Germany

  • Chafic Abou Akar, BMW TechOffice, Munich, Germany


bmw-tensorflow-training-gui's Issues

Performance issues in training_api/research/object_detection/utils/ops.py

Hello, I found that in training_api/research/object_detection/utils/ops.py, tf.zeros and tf.shape are repeatedly created in the loop, here. As they remain the same in each iteration, I think they should be moved before the loop to avoid creating redundant nodes in the tf computation graph.

Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.

network mode of tf_docker_sdk

Thank you for sharing your great repository. I want to know why you set only the network mode of tf_docker_sdk to "host" while the others belong to the bridge network.

Thanks.

Performance issues in training_api/research/ (by P3)

Hello! I've found a performance issue in your program:

  • tf.Session being defined repeatedly leads to incremental overhead.

You can make your program more efficient by fixing this bug. Here is the Stack Overflow post to support it.

Below is detailed description about tf.Session being defined repeatedly:

  • in object_detection/eval_util.py: sess = tf.Session(master, graph=tf.get_default_graph())(line 273) is defined in the function _run_checkpoint_once(line 211) which is repeatedly called in the loop while True:(line 431).
  • in slim/datasets/download_and_convert_cifar10.py: with tf.Session('') as sess:(line 91) is defined in the function _add_to_tfrecord(line 64) which is repeatedly called in the loop for i in range(_NUM_TRAIN_FILES):(line 184).

tf.Session being defined repeatedly could lead to incremental overhead. If you define tf.Session out of the loop and pass tf.Session as a parameter to the loop, your program would be much more efficient.
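The fix described above is the standard hoisting pattern: construct the expensive object once, outside the loop, and pass it in as a parameter. A minimal sketch using a stand-in class (not actual TensorFlow) that counts how many times it is constructed:

```python
class Session:
    """Stand-in for an expensive-to-construct resource such as tf.Session."""
    constructions = 0

    def __init__(self):
        Session.constructions += 1

    def run(self, x):
        return x * 2

def process_slow(items):
    # Anti-pattern: a new Session is created on every iteration.
    return [Session().run(x) for x in items]

def process_fast(items, sess):
    # Hoisted: the caller creates one Session and reuses it in the loop.
    return [sess.run(x) for x in items]
```

Both functions return the same results, but the hoisted version constructs the resource exactly once regardless of how many items are processed.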

Looking forward to your reply. Btw, I am very glad to create a PR to fix it if you are too busy.

URL malformed on RHEL

Hi!

I am trying to install the BMW Training GUI on a RHEL machine --> AC922, IBM Power architecture (ppc64le).

I reach the landing page with http://172.22.68.64:4200/training, but then I get a notification:

'The URI is malformed' in Firefox.


Failed to execute 'open' on 'XMLHttpRequest': Invalid URL in chromium:


The requesting machine is in the same network and has the operating system: Ubuntu 18.04.

I also installed the setup on the requesting Ubuntu 18.04 machine and it worked there (I don't think it's network related). I've been searching the code for anything which could cause the error, but I cannot find it.

Does anyone have an idea what causes the problem or even a solution on how the problem can be solved?

I appreciate any hints. Thanks.

Issue with image name in docker_sdk : "tf2_docker_sdk_api:latest" for docker compose in run_gpu.yml

After completing a successful GPU build of the tool, while executing docker compose an error occurs stating that "tf2_docker_sdk_api:latest" does not exist or that permission to access it is denied.
Below are the images created after the successful build (with build_gpu.yaml).

Am I missing something here, and if so, where does tf2_docker_sdk_api exist? Or is this a naming issue conflicting with run_gpu.yaml under docker_sdk, where tf2_docker_sdk_api should be replaced with "tf2_training_api_gpu" (since mine is a GPU build)?
It seems to work when this is done.

Support for IBM Power Architecture (ppc64le)

Hello,

is it possible to get support for IBM Power architecture (ppc64le) / crossbuilds?
The requirements should be met with existing GPU server models (S822LC, AC922 and IC922)

Thanks!

Creating job crashing between step 2 and 3

Hello !

I try to start a job and after clicking next in "General Settings" it loads for 30 sec then displays "An error has occurred". I tested on Chrome and Firefox, in localhost I get the error "Cross-origin request blocked" in the browser console.
I modified the environment.ts config file to use the right domain name.
I did not find any interesting logs in the 3 containers.

Do you have any idea where the problem could come from?
Thanks in advance

Unsupported config option for services.docker_sdk: 'runtime

Hello,

I am having a problem running the GPU version- I get an error:
Unsupported config option for services.docker_sdk: 'runtime'

I've been through the prerequisites and everything seems ok.
docker is showing 'nvidia' as a runtime, and the test nvidia docker works

I have also tried running the CPU version: the GUI runs, although I do get an 'unknown error' after the 'general settings' page, but I'm guessing this might be because I built with the GPU option.


thanks

Andrew

colab

Is there any method to run the same thing on colab for training purpose?

Error while building docker

Step 7/12 : RUN $(npm bin)/ng build --prod --build-optimizer
---> Running in 8e8f5cdb6770
Browserslist: caniuse-lite is outdated. Please run next command npm update

Date: 2019-12-20T12:44:35.601Z
Hash: c33a9b977eea03abeb23
Time: 12580ms
chunk {0} runtime-es5.741402d1d47331ce975c.js (runtime) 1.41 kB [entry] [rendered]
chunk {1} main-es5.4af9b61479361f268d39.js (main) 128 bytes [initial] [rendered]
chunk {2} polyfills-es5.6d628ad86fb70cb4472e.js (polyfills) 68.1 kB [initial] [rendered]

ERROR in node_modules/codelyzer/angular/styles/cssAst.d.ts(38,9): error TS1086: An accessor cannot be declared in an ambient context.
node_modules/codelyzer/angular/styles/cssAst.d.ts(39,9): error TS1086: An accessor cannot be declared in an ambient context.
node_modules/codelyzer/util/function.d.ts(21,9): error TS1086: An accessor cannot be declared in an ambient context.
node_modules/codelyzer/util/function.d.ts(22,9): error TS1086: An accessor cannot be declared in an ambient context.

ERROR: Service 'user_interface' failed to build: The command '/bin/sh -c $(npm bin)/ng build --prod --build-optimizer' returned a non-zero code: 1

Referred to:
postcss/autoprefixer#1184
Cezerin2/Cezerin2#55
https://stackoverflow.com/questions/55271798/browserslist-caniuse-lite-is-outdated-please-run-next-command-npm-update-cani
but couldn't resolve the issue. Any suggestions and help will be really appreciated.
thanks in advance.

Training job crash if classes are digits

Hi !
I am working on a use case where the object to detect is a serial number engraved on a part.
To achieve that, I've annotated serial number's digits (from 0 to 9). But, when I start the training, the job crashes.
If I add any letter after the digit (1A instead of 1), the training succeeds.
Thanks for the help

Couldn't choose Type of Labels sometimes

Around 70% of the time I'm unable to choose pascal or json labels, because the second box on the first page is broken. I haven't figured out yet how to reproduce this bug or how to deal with it, but I will update this issue as soon as I understand it.

