Topic: instruction-tuning
Something interesting about instruction-tuning.
Reverse Instructions to generate instruction tuning data with corpus examples
User: akoksal
Home Page: https://arxiv.org/abs/2304.08460
Crosslingual Generalization through Multitask Finetuning
Organization: bigscience-workshop
Home Page: https://arxiv.org/abs/2211.01786
DrugAssist: A Large Language Model for Molecule Optimization
User: blazerye
Home Page: https://arxiv.org/abs/2401.10334
✨✨ Latest Advances on Multimodal Large Language Models
User: bradyfu
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
Organization: cambrian-mllm
Home Page: https://cambrian-mllm.github.io/
Generative Representational Instruction Tuning
Organization: contextualai
Home Page: https://arxiv.org/abs/2402.09906
The repository for the paper "INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning"
User: daod
DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤
Organization: datadreamer-dev
Home Page: https://datadreamer.dev
A summary of Prompt & LLM papers, open-source data & models, and AIGC applications
User: dsxiangli
Open-source Self-Instruction Tuning Code LLM
Organization: fsoft-ai4code
DISC-FinLLM, a Chinese financial large language model (LLM) designed to provide users with professional, intelligent, and comprehensive financial consulting services in financial scenarios.
Organization: fudandisc
Home Page: https://fin.fudan-disc.com/
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
User: haotian-liu
Home Page: https://llava.hliu.cc
Research Trends in LLM-guided Multimodal Learning.
User: henryhzy
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
User: hiyouga
Home Page: https://arxiv.org/abs/2403.13372
[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
User: hkuds
Home Page: https://arxiv.org/abs/2310.13023
[KDD'2024] "UrbanGPT: Spatio-Temporal Large Language Models"
User: hkuds
Home Page: https://urban-gpt.github.io
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
Organization: hkust-nlp
CIKM2023 Best Demo Paper Award. HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformers. Happy hugging for NLP! 😊
Organization: hugailab
Home Page: https://wjn1996.github.io/blogs/HugNLP/
Code for instruction-tuning Stable Diffusion.
Organization: huggingface
Home Page: https://huggingface.co/blog/instruction-tuning-sd
BayLing ("百聆") is an English/Chinese LLM built on LLaMA with enhanced language alignment, showing superior capability in English/Chinese generation, instruction following, and multi-turn interaction; it reaches about 90% of ChatGPT's performance on multilingual and general-task benchmarks.
Organization: ictnlp
Home Page: http://nlp.ict.ac.cn/bayling
Instruction Tuning with GPT-4
User: instruction-tuning-with-gpt-4
Home Page: https://instruction-tuning-with-gpt-4.github.io/
InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
Organization: internlm
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
User: luodian
Home Page: https://otter-ntu.github.io/
MindSpore online courses: Step into LLM
Organization: mindspore-courses
(AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions
Organization: mlpc-ucsd
Home Page: https://arxiv.org/abs/2308.09936
A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
Organization: modelscope
Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model
User: next-gpt
Home Page: https://next-gpt.github.io/
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
Organization: nvlabs
Home Page: https://arxiv.org/abs/2402.09353
[CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Language 3D Assistant.
Organization: open3da
Home Page: https://ll3da.github.io/
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
Organization: opengvlab
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible.
User: phoebussi
Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
Organization: pku-yuangroup
Home Page: https://arxiv.org/pdf/2311.10122.pdf
[ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning
Organization: princeton-nlp
All available datasets for Instruction Tuning of Large Language Models
User: raunak-agarwal
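Many instruction-tuning datasets like those catalogued above follow the Alpaca-style record layout with `instruction`, optional `input`, and `output` fields. A minimal sketch of rendering one such record into a training prompt (the boilerplate text follows the common Alpaca convention; individual datasets vary in field names and templates):

```python
def format_example(record):
    """Render one Alpaca-style instruction-tuning record as a training prompt."""
    if record.get("input"):
        # Records with extra context get the three-section template.
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            f"### Response:\n{record['output']}"
        )
    # Records without context use the two-section template.
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['output']}"
    )

example = {"instruction": "Translate to French.", "input": "Hello", "output": "Bonjour"}
print(format_example(example))
```

During fine-tuning the loss is typically computed only on the tokens after "### Response:", so the model learns to produce answers rather than echo instructions.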
Papers and Datasets on Instruction Tuning and Following. ✨✨✨
User: renzelou
Home Page: https://arxiv.org/abs/2303.10475
The official GitHub page for the survey paper "A Survey of Large Language Models".
Organization: rucaibox
Home Page: https://arxiv.org/abs/2303.18223
DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI
Organization: salesforce
Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding"
Organization: salt-nlp
Home Page: https://llavar.github.io/
Awesome LLM papers and repos covering a comprehensive range of topics.
User: shure-dev
Home Page: https://shorturl.at/bmuwC
A dataset-collection and preprocessing framework for NLP extreme multitask learning
User: sileod
🐳 Aurora is a Chinese MoE model. Aurora builds on Mixtral-8x7B, activating the model's chat capability in the Chinese open domain.
User: wangrongsheng
Home Page: https://arxiv.org/abs/2312.14557
The ParroT framework enhances and regulates translation abilities during chat, based on open-source LLMs (e.g., LLaMA-7B, BLOOMZ-7b1-mt) and human-written translation and evaluation data.
User: wxjiao
mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
Organization: x-plug
Home Page: https://www.modelscope.cn/studios/damo/mPLUG-Owl
Project for the paper "Instruction Tuning for Large Language Models: A Survey"
User: xiaoya-li
Home Page: https://arxiv.org/abs/2308.10792
A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
User: yaodongc
Aligning pretrained language models with instruction data generated by themselves.
User: yizhongw
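The Self-Instruct approach referenced above bootstraps new instructions from a small seed pool: the model is prompted with existing instructions and asked to produce novel ones, and near-duplicates are filtered out before fine-tuning. A toy sketch of one bootstrapping round, with a stub standing in for the real model call (all names here are hypothetical; the actual pipeline filters candidates by ROUGE-L similarity rather than exact match):

```python
import random

def toy_llm(prompt, seed=0):
    """Stand-in for a real LLM completion call (hypothetical; swap in any API)."""
    rng = random.Random(seed)
    templates = [
        "Summarize the following paragraph.",
        "Translate the sentence into German.",
        "List three pros and cons of the idea below.",
    ]
    return rng.choice(templates)

def self_instruct_round(seed_tasks, n_new=3):
    """One bootstrapping round: prompt the model with recent instructions,
    collect novel generations, and drop exact duplicates."""
    pool = list(seed_tasks)
    for i in range(n_new):
        # In-context examples come from the current pool, as in Self-Instruct.
        prompt = "Come up with a new task:\n" + "\n".join(pool[-5:])
        candidate = toy_llm(prompt, seed=i)
        if candidate not in pool:  # crude dedup; the paper uses a ROUGE-L threshold
            pool.append(candidate)
    return pool

print(self_instruct_round(["Write a poem about the sea."]))
```

The grown pool is then rendered into training prompts and used to fine-tune the same base model, closing the loop.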
A curated list of awesome instruction tuning datasets, models, papers and repositories.
User: zhilizju
Collection of training data management explorations for large language models
User: zigew
Home Page: https://arxiv.org/abs/2312.01700
[Paper][ACL 2024 Findings] Knowledgeable Preference Alignment for LLMs in Domain-specific Question Answering
Organization: zjukg
Home Page: https://arxiv.org/abs/2311.06503
An Open-sourced Knowledgeable Large Language Model Framework.
Organization: zjunlp
Home Page: http://knowlm.zjukg.cn/