Instruction-Tuning-with-GPT-4 / GPT-4-LLM
Instruction Tuning with GPT-4
☆4,281 · Updated last year
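As a rough illustration of what "instruction tuning with GPT-4" produces, the sketch below shows an Alpaca-style record (instruction / input / output) and how such a record is typically flattened into a supervised fine-tuning prompt. The field names, template wording, and helper function are illustrative assumptions, not code from the repository.

```python
# Minimal sketch of an Alpaca-style instruction-following record and a
# prompt template; names and wording are illustrative, not the repo's own.
import json

record = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "Instruction tuning aligns a pretrained language model with user intent.",
    "output": "Instruction tuning teaches a pretrained model to follow user requests.",
}

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input.\n"
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

def to_training_example(rec: dict) -> dict:
    """Turn one record into a (prompt, target) pair for supervised fine-tuning."""
    return {"prompt": PROMPT_TEMPLATE.format(**rec), "target": rec["output"]}

print(json.dumps(to_training_example(record), indent=2))
```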
Alternatives and similar repositories for GPT-4-LLM:
Users interested in GPT-4-LLM are comparing it to the libraries listed below.
- Aligning pretrained language models with instruction data generated by themselves. ☆4,314 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,832 · Updated last year
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning). ☆2,707 · Updated last year
- ⚡LLM Zoo is a project that provides data, models, and an evaluation benchmark for large language models.⚡ ☆2,937 · Updated last year
- LLaMA: Open and Efficient Foundation Language Models ☆2,803 · Updated last year
- Let ChatGPT teach your own chatbot in hours with a single GPU! ☆3,170 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆3,045 · Updated 8 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,766 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,601 · Updated last year
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,678 · Updated 3 months ago
- Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model (a low-resource Chinese LLaMA + LoRA recipe, with a structure modeled on Alpaca) ☆4,151 · Updated 4 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning. ☆6,040 · Updated 6 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,322 · Updated 9 months ago
- Example models using DeepSpeed ☆6,377 · Updated last week
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,617 · Updated last year
- LLM as a Chatbot Service ☆3,308 · Updated last year
- An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. ☆8,377 · Updated this week
- An open-source framework for training large multimodal models. ☆3,857 · Updated 6 months ago
- The Official Python Client for Lamini's API ☆2,525 · Updated this week
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks ☆2,020 · Updated last year
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) ☆7,676 · Updated last year
- Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned and more will be updated) ☆3,889 · Updated 9 months ago
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath ☆9,358 · Updated 7 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,694 · Updated 7 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax. ☆2,459 · Updated 7 months ago
- GLM (General Language Model) ☆3,230 · Updated last year
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset ☆7,462 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch (see the usage sketch after this list). ☆6,818 · Updated this week
- A large-scale 7B pretrained language model developed by BaiChuan-Inc. ☆5,689 · Updated 8 months ago
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆3,888 · Updated this week
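Several of the entries above (QLoRA, the GPTQ packages, bitsandbytes, and the various LoRA-based LLaMA recipes) revolve around the same basic recipe: load a quantized base model, then train small LoRA adapters on top of it. The sketch below shows a common Hugging Face transformers + peft + bitsandbytes version of that recipe; the model checkpoint and LoRA hyperparameters are placeholder assumptions, not settings taken from any specific repository listed here.

```python
# Minimal sketch of 4-bit loading plus LoRA adapters; checkpoint name and
# hyperparameters are placeholders, not values from the repos above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "huggyllama/llama-7b"  # placeholder; any causal LM checkpoint works

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit (QLoRA-style)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections, as in the LoRA paper
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the small LoRA matrices are trainable
```

The quantized base weights stay frozen; only the low-rank adapter matrices receive gradients, which is what lets several of the projects above fine-tune 7B-class models on a single consumer GPU.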