AetherCortex / Llama-X
Open Academic Research on Improving LLaMA to SOTA LLM
☆1,618 · Updated last year
Alternatives and similar repositories for Llama-X:
Users interested in Llama-X are comparing it to the libraries listed below.
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tuning)… (see the LoRA sketch after this list) ☆2,687 · Updated last year
- [NIPS2023] RRHF & Wombat ☆799 · Updated last year
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,127 · Updated 11 months ago
- ☆903 · Updated 8 months ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,318 · Updated 11 months ago
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,059 · Updated 6 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,668 · Updated 6 months ago
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks ☆2,006 · Updated last year
- ⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡ ☆2,932 · Updated last year
- 🩹Editing large language models within 10 seconds⚡ ☆1,310 · Updated last year
- Instruction Tuning with GPT-4 ☆4,266 · Updated last year
- LOMO: LOw-Memory Optimization ☆980 · Updated 7 months ago
- ☆728 · Updated 8 months ago
- A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,004 · Updated last year
- ☆456 · Updated 8 months ago
- A modular RL library to fine-tune language models to human preferences ☆2,274 · Updated 11 months ago
- ☆889 · Updated 6 months ago
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) ☆1,013 · Updated 5 months ago
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,106 · Updated last year
- 4-bit quantization of LLaMA using GPTQ (see the quantization sketch after this list) ☆3,036 · Updated 7 months ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,269 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,031 · Updated 10 months ago
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream-tuning. ☆984 · Updated 9 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,681 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,584 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,646 · Updated 6 months ago
- Easy and efficient finetuning of LLMs (supports LLaMA, LLaMA-2, LLaMA-3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆592 · Updated 3 weeks ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆575 · Updated 6 months ago
- A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too". ☆928 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,364 · Updated 11 months ago
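
Many of the entries above revolve around parameter-efficient fine-tuning (LoRA, p-tuning, delta tuning) of LLaMA-family models. As a rough illustration of the kind of setup these toolkits wrap, here is a minimal LoRA sketch using the Hugging Face `peft` and `transformers` libraries; the base model name and all hyperparameters are illustrative placeholders, not values taken from any listed project.

```python
# Minimal LoRA fine-tuning sketch (Hugging Face peft + transformers).
# The base model name and all hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "huggyllama/llama-7b"  # placeholder; any causal-LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA injects small trainable low-rank matrices into selected projection
# layers, so only a tiny fraction of the parameters is updated during tuning.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # LLaMA attention projections
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the trainable-parameter ratio
# The wrapped model can then be passed to a standard transformers Trainer.
```

The wrapped model trains like any other causal LM; only the adapter weights need to be saved, which keeps checkpoints small compared to full fine-tuning.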
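
Two of the entries above concern GPTQ post-training quantization of LLaMA to 4 bits. As a rough sketch of how such a quantization is commonly driven, here is the `transformers` GPTQ integration; the model name, calibration dataset, and output path are illustrative assumptions, not taken from the listed repositories, and the snippet additionally requires the `optimum` and `auto-gptq` packages.

```python
# Illustrative GPTQ 4-bit post-training quantization via transformers.
# Requires the optimum and auto-gptq packages; names and paths are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "huggyllama/llama-7b"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_id)

# GPTQ quantizes weights layer by layer against a small calibration set,
# minimizing the output error introduced by the lower-precision weights.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Quantization runs while the model is loaded; the result can be saved and
# later reloaded directly as a 4-bit checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)
model.save_pretrained("llama-7b-gptq-4bit")  # placeholder output directory
```

The quantized checkpoint loads back with the usual from_pretrained call, with the weights taking roughly a quarter of the memory of the fp16 model at the small accuracy cost reported in the GPTQ paper.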