```
🤗 Jules Belveze
├── 🦜 Open Source
│   ├── tsa - Dual-attention autoencoder
│   ├── bert-squeeze - Speed up Transformer models
│   ├── bundler - Learn from your data
│   ├── nhelper - Behavioral testing
│   └── time-series-dataset - Dataset utilities
├── 📚 Contributions
│   ├── 🤗 Hugging Face Ecosystem
│   │   ├── t5-small-headline-generation - T5 for headline generation
│   │   └── tldr_news - Summarization dataset
│   ├── ⚕️ John Snow Labs Ecosystem
│   │   └── langtest - Deliver safe & effective NLP models
│   ├── 🧹 Dust
│   │   └── Dust - Customizable and secure AI assistants
│   ├── 💫 SpaCy Ecosystem
│   │   └── concepCy - SpaCy wrapper for ConceptNet
│   ├── bulk - Contributed the color feature
│   └── FastBERT - Contributed batching inference
└── 📝 Blogs & Papers
    ├── Atlastic Reputation AI: Four Years of Advancing and Applying a SOTA NLP Classifier
    ├── Real-World MLOps Examples: Model Development in Hypefactors
    ├── LangTest: Unveiling & Fixing Biases with End-to-End NLP Pipelines
    ├── Case Study: MLOps for NLP-powered Media Intelligence using Metaflow
    ├── Scaling Machine Learning Experiments With neptune.ai and Kubernetes
    └── Scaling-up PyTorch inference: Serving billions of daily NLP inferences with ONNX Runtime
```
I currently work as a Software Engineer at @Dust.
My previous experience includes leading AI development and building out entire AI infrastructures at Ava, as well as spearheading MLOps and NLP projects at John Snow Labs. I have also engineered multilingual NLP solutions at Hypefactors and conducted deep learning research at Microsoft.
I believe that automating model development and deployment with MLOps enables faster feature releases. To that end, I have worked with tools such as PyTorch Lightning, FastAPI, Hugging Face, Kubernetes, ONNX Runtime, and more.
Apart from this, I have worked extensively with Deep Learning and Time Series, completing my Master's thesis on Anomaly Detection in High-Dimensional Time Series. Additionally, I am keenly interested in exploring state-of-the-art techniques to speed up the inference of Deep Learning models, especially Transformer-based ones.
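To give a flavor of what time-series anomaly detection looks like in its simplest form, here is a much-simplified, illustrative sketch (not the thesis method; function names and the threshold are arbitrary): it flags time steps where any feature deviates from its mean by more than a given number of standard deviations.

```python
# Illustrative baseline for multivariate time-series anomaly detection:
# flag time steps whose feature-wise z-score exceeds a threshold.
# (Hypothetical sketch -- not the method from the thesis.)
import statistics

def zscore_anomalies(series, threshold=2.5):
    """Return sorted indices of time steps where any feature deviates
    by more than `threshold` standard deviations from that feature's mean."""
    n_features = len(series[0])
    anomalous = set()
    for f in range(n_features):
        values = [step[f] for step in series]
        mean = statistics.fmean(values)
        std = statistics.pstdev(values)
        if std == 0:
            continue  # constant feature, no deviation to measure
        for t, v in enumerate(values):
            if abs(v - mean) / std > threshold:
                anomalous.add(t)
    return sorted(anomalous)

# A 2-feature series with an obvious spike in the second feature at index 5.
series = [(1.0, 10.0)] * 10
series[5] = (1.0, 100.0)
print(zscore_anomalies(series))  # -> [5]
```

Real high-dimensional settings need more than per-feature thresholds (correlated features, seasonality, drift), which is where learned models such as autoencoders come in.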
I am an avid open source contributor and advocate for ethical AI practices.