lorenzo famiglini's Projects
Implementations of algorithms such as genetic algorithms, gradient descent, reinforcement learning, and nonlinear programming.
Simple Python wrapper around runTagger.sh of ark-tweet-nlp
A professionally curated list of awesome Conformal Prediction videos, tutorials, books, papers, PhD theses, articles and open-source libraries.
😎 Awesome list of tools and projects with the awesome LangChain framework
A curated list of awesome Machine Learning frameworks, libraries and software.
Calibration Framework for Machine Learning and Deep Learning
Python Framework to calibrate confidence estimates of classifiers like Neural Networks
Camoscio: An Italian instruction-tuned LLaMA
CIFAR-10 and CIFAR-100 results with VGG16, ResNet50, and WideResNet using PyTorch Lightning
Code for binary segmentation of clothes
Making large AI models cheaper, faster and more accessible
Big Data: data cleaning, data integration (record linkage), MongoDB, and data analysis in Python
Data visualizations with the ggplot2 library in R
Projects and assignments for deep learning tasks
Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort. See our docs: https://docs.deepchecks.com
State-of-the-art deep learning model for analyzing sentiment, emotion, sarcasm etc.
Example Repo for the Udemy Course "Deployment of Machine Learning Models"
Deep neural model for a classification task
Code for the paper "Calibrating Deep Neural Networks using Focal Loss"
A Library for Uncertainty Quantification.
gpt4all: a chatbot trained on a massive collection of clean assistant data including code, stories and dialogue
Implementation of Reinforcement Learning from Human Feedback (RLHF)
The detection of irony and sarcasm is one of the most insidious challenges in Natural Language Processing. Over the years, several techniques have been studied to analyze these rhetorical figures, trying to identify the elements that significantly discriminate sarcastic or ironic text from literal text. This study analyzes several state-of-the-art models. On the Machine Learning side, the most discriminating features, such as part of speech, pragmatic particles, and sentiment, are studied; these models are then optimized, comparing Bayesian optimization against random search. Once the best hyperparameters are identified, ensemble methods such as Bayesian Model Averaging (BMA) are applied. On the Deep Learning side, two main models are analyzed: DeepMoji, developed at MIT, and a Transformer-based model that exploits the generalization power of RoBERTa. After comparing these models, the main goal is to design a new system able to better capture the two rhetorical figures. To this end, two attention-based models are proposed, exploiting transfer learning and using BERTweet and DeepMoji as feature extractors. Finally, an ensemble method is applied over the proposed approaches to identify the combination of algorithms that achieves the best results. Frameworks used: PyTorch, TensorFlow 2.0, scikit-learn, Scikit-Optimize, Transformers
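The hyperparameter search described above can be sketched as follows. This is an illustrative example, not the thesis code: the dataset, model, and search space are toy assumptions standing in for the tweet features and classifiers used in the study, and it shows only the random-search baseline (the Bayesian side would swap in `BayesSearchCV` from Scikit-Optimize over the same estimator and space).

```python
# Minimal sketch of a random hyperparameter search, standing in for the
# random-search baseline the abstract compares against Bayesian optimization.
# Dataset, model, and search space are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

# Toy stand-in for extracted tweet features (POS, pragmatic particles, sentiment).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Random search draws hyperparameter values independently from these ranges;
# Bayesian optimization would instead model past trials to pick the next one.
param_distributions = {
    "C": np.logspace(-3, 3, 100),  # inverse regularization strength
}

search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions,
    n_iter=20,         # number of sampled configurations
    cv=3,              # 3-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print(f"best C: {search.best_params_['C']:.4g}, "
      f"CV accuracy: {search.best_score_:.3f}")
```

Both search strategies expose the same `best_params_`/`best_score_` interface, which makes the comparison in the study a drop-in swap of the search object.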
Overview and tutorial of the LangChain Library
🐍