For model training, we used PyTorch Lightning to fine-tune our model on the Food101 dataset, which is available through TorchVision.
Our model is built on the VisionTransformer architecture, known for its strong performance in image classification tasks.
We trained for 10 epochs, using distributed data parallelism to parallelize optimization across devices. Training progress, metrics, and insights are available on the W&B dashboard.
To set up this repo locally, create a virtual environment (e.g. with pyenv):
```shell
brew install pyenv
pyenv init
pyenv install -s 3.10.10
pyenv virtualenv 3.10.10 foodformer
pyenv activate foodformer
pyenv local foodformer
```
Then install the dependencies and pre-commit hooks:
```shell
pip install -r requirements.txt
pre-commit install
```
You can access the demo of the application here.
You can use API platforms like Postman or Insomnia, or the command-line tool curl, to query the API:

- for the healthcheck endpoint:

```shell
curl http://localhost:8080
```
- for a POST endpoint called `predict`:

```shell
curl -X 'POST' \
  'http://35.88.15.190/predict' \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -F 'file=@<path-to-image>;type=image/jpeg'
```
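The same request can be made from Python. The sketch below builds the multipart body by hand using only the standard library; the form field name `file`, the server URL, and the helper names are assumptions, not confirmed by the repo:

```python
import uuid
from urllib import request


def encode_multipart(field: str, filename: str, data: bytes,
                     content_type: str = "image/jpeg") -> tuple[bytes, str]:
    """Build a multipart/form-data body equivalent to curl's -F flag."""
    boundary = uuid.uuid4().hex
    head = (
        f'--{boundary}\r\n'
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f'Content-Type: {content_type}\r\n\r\n'
    ).encode()
    tail = f'\r\n--{boundary}--\r\n'.encode()
    return head + data + tail, f'multipart/form-data; boundary={boundary}'


def predict(image_path: str, url: str = "http://35.88.15.190/predict") -> bytes:
    """POST an image to the predict endpoint and return the raw JSON bytes."""
    with open(image_path, "rb") as fh:
        body, ctype = encode_multipart("file", image_path, fh.read())
    req = request.Request(
        url,
        data=body,
        headers={"accept": "application/json", "Content-Type": ctype},
    )
    with request.urlopen(req) as resp:
        return resp.read()
```

If the `requests` library is available, `requests.post(url, files={"file": open(path, "rb")})` does the same encoding in one call.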
For load testing via Locust, follow the instructions below:

```shell
cd load_testing
docker build --no-cache -t my-image:latest .
docker run -p 8890:8089 my-image:latest
```
Set the load testing parameters and take the required screenshots. Screenshots from my test runs are as follows:
Other screenshots can be found here.
You can access the Grafana dashboard snapshot here. It captures key metrics and insights from the project.