Name: Guglielmo Camporese
Type: User
Company: Disney Research Studios
Bio: Researcher at Disney Research (Zurich) - Prev. Ph.D. at UniPD (Italy) & Applied Scientist Intern at Amazon Science (AWS AI Labs & Alexa AI).
Twitter: gucamporese
Location: Zurich, Switzerland
Blog: guglielmocamporese.github.io
Guglielmo Camporese's Projects
Study and implementation of a GAN with Goodfellow's approach.
A curated list of papers and resources linked to action anticipation and early action recognition from videos.
Code for the Top-1 submission to the contest of VCS AY 2020-2021, the Vision and Cognitive Service class, University of Padova, Italy.
Open AI Cartpole Solved in TensorFlow
Code for the Paper: "Conditional Variational Capsule Network for Open Set Recognition", Y. Guo, G. Camporese, W. Yang, A. Sperduti, L. Ballan, ICCV, 2021.
What can we do with Vector Quantization on Deep Nets?
PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
Simple PyTorch Dataset for EPIC-Kitchens-55 and EPIC-Kitchens-100 that handles frames and features (RGB, optical flow, and objects) for the Action Recognition and Action Anticipation tasks!
Kaggle challenge on GANs for generating dog images.
Minimal GLOM implementation in PyTorch.
Benchmarks for testing GPUs.
A repo for training and finetuning models for hands segmentation.
Kaggle Competition
Targeted dropout implemented in Keras
Download DeepMind's Kinetics dataset.
In this work I investigate the speech command task by developing and analyzing deep learning models. The state-of-the-art technology uses convolutional neural networks (CNNs) because of their intrinsic ability to learn correlated representations such as speech. In particular, I develop different CNNs trained on the Google Speech Commands Dataset and test them in different scenarios. A main problem in speech recognition is the variability in how different people pronounce the same words: one way to build a model invariant to this variability is to augment the dataset by perturbing the input. In this work I study two kinds of augmentation: Vocal Tract Length Perturbation (VTLP) and Synchronous Overlap and Add (SOLA), which locally perturb the input in frequency and time, respectively. The models trained on augmented data outperform all the models trained on the original dataset in accuracy, precision, and recall. The design of the CNN also has an impact on learning invariances: the inception CNN architecture helps to learn features that are invariant to speech variability by using different kernel sizes for convolution. Intuitively, this is because the model can implicitly detect speech patterns of different lengths in the audio features.
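The time-domain perturbation described above can be illustrated with a minimal SOLA-style time stretch in NumPy. This is an illustrative sketch, not the project's code: the function name and parameters are assumptions, and a real pipeline would also align frames by cross-correlation before overlap-adding.

```python
import numpy as np

def sola_time_stretch(x, rate=1.1, frame_len=1024, overlap=256):
    """Naive SOLA-style time stretch (illustrative sketch).

    Frames are read from `x` at a rate-scaled analysis hop and written
    at a fixed synthesis hop, linearly cross-faded over `overlap` samples,
    so the output lasts roughly len(x) / rate samples.
    """
    syn_hop = frame_len - overlap           # fixed output hop
    ana_hop = int(syn_hop * rate)           # rate-scaled input hop
    fade_out = np.linspace(1.0, 0.0, overlap)
    fade_in = 1.0 - fade_out
    n_frames = max(1, (len(x) - frame_len) // ana_hop + 1)
    out = np.zeros(syn_hop * (n_frames - 1) + frame_len)
    for i in range(n_frames):
        frame = x[i * ana_hop : i * ana_hop + frame_len].copy()
        start = i * syn_hop
        if i > 0:
            # cross-fade the new frame with the tail of the previous one
            frame[:overlap] *= fade_in
            out[start : start + overlap] *= fade_out
        out[start : start + frame_len] += frame
    return out
```

With `rate=1.0` the analysis and synthesis hops coincide and the cross-fade weights sum to one, so the input is reproduced unchanged; rates above one shorten (speed up) the signal and rates below one lengthen it.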
Simple and easy-to-use Python bot for the COVID registration booking system of the math department @ UniPD (Torre Archimede). This API creates an interface with the official website, adding more useful functionality.
PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057
Official code of "Where are my Neighbors? Exploiting Patches Relations in Self-Supervised Vision Transformer", Guglielmo Camporese, Elena Izzo, Lamberto Ballan. BMVC, 2022.
Code for the Paper: Antonino Furnari and Giovanni Maria Farinella. What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention. International Conference on Computer Vision, 2019.
PyTorch implementation of SimSiam https://arxiv.org/abs/2011.10566
This MATLAB code simulates Rayleigh and Rician wireless channels based on the sum-of-sinusoids model proposed by Jakes and on a filtering approach. The code also includes an evaluation based on a comparison of the second-order statistics and of the LCR/AFD. All this work is an implementation of the papers:
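The Jakes sum-of-sinusoids idea can be sketched in a few lines: the complex fading gain is a normalized sum of phasors, each with a Doppler shift drawn from a random arrival angle. This is a minimal Python sketch (the repository itself is MATLAB), with an illustrative function name and parameters, not the repository's implementation.

```python
import numpy as np

def jakes_rayleigh(fd, fs, n_samples, n_sinusoids=32, rng=None):
    """Sum-of-sinusoids (Jakes-style) Rayleigh fading gain (sketch).

    fd: maximum Doppler shift [Hz]; fs: sample rate [Hz].
    Returns a complex gain sequence with unit average power.
    """
    rng = np.random.default_rng(rng)
    t = np.arange(n_samples) / fs
    # random arrival angle and initial phase for each sinusoid
    alpha = rng.uniform(0.0, 2 * np.pi, n_sinusoids)
    phi = rng.uniform(0.0, 2 * np.pi, n_sinusoids)
    doppler = 2 * np.pi * fd * np.cos(alpha)   # per-path Doppler [rad/s]
    h = np.exp(1j * (np.outer(t, doppler) + phi)).sum(axis=1)
    return h / np.sqrt(n_sinusoids)            # normalize average power to 1
```

The envelope `|h|` is approximately Rayleigh distributed; adding a deterministic line-of-sight phasor to `h` before normalization would give the Rician case mentioned above.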
Baseline code for TinyAction Challenge
You like pytorch? You like micrograd? You love tinygrad! ❤️
Useful scripts/commands/settings I mostly use for research/work.