Code and documentation to train Stanford's Alpaca models, and generate the data.
✨✨Latest Papers and Datasets on Multimodal Large Language Models, and Their Evaluation.
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Must-read Papers on LLM Agents.
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
An open-sourced knowledgeable large language model framework.
A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca).
Reading list on instruction tuning, a trend that starts from Natural Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
PhoGPT: Generative Pre-training for Vietnamese (2023)
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
A collection of ChatGPT and GPT-3.5 instruction-based prompts for generating and classifying text.
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break KD down into knowledge elicitation and distillation algorithms, and explore skill and vertical distillation of LLMs.
[NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing".
[ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"
Finetune LLaMA-7B with Chinese instruction datasets.
WangChanGLM 🐘 - The Multilingual Instruction-Following Model
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"