An awesome list for running CLIP (Contrastive Language-Image Pre-Training) in production.
- Convert model to OpenVINO Intermediate Representation (IR) format
- PyTorch: Exporting a Model from PyTorch to ONNX and Running It Using ONNX Runtime
- https://huggingface.co/laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K
- https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K
- https://github.com/facebookresearch/MetaCLIP#pre-trained-models
- https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark.png (commits as of Oct 17, 2022)
- https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv (commits as of Oct 22, 2023)
- https://github.com/huggingface/pytorch-image-models/tree/main/results (commits as of May 25, 2023)
- Learning Transferable Visual Models From Natural Language Supervision [submitted on 26 Feb 2021]
- Reproducible scaling laws for contrastive language-image learning [submitted on 14 Dec 2022]