EleutherAI / lm-evaluation-harness
A framework for few-shot evaluation of language models.
☆7,848 · Updated this week
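As a quick orientation, here is a minimal evaluation sketch using the harness's Python API. This assumes lm-evaluation-harness v0.4+ with the `hf` (Hugging Face transformers) backend; the model (`EleutherAI/pythia-160m`) and task (`lambada_openai`) are illustrative choices, not prescriptions.

```python
# Minimal sketch: evaluate a small Hugging Face model on one task
# via lm-evaluation-harness's Python API (v0.4+). The model and
# task names below are illustrative examples only.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # transformers backend
    model_args="pretrained=EleutherAI/pythia-160m",  # any HF causal LM
    tasks=["lambada_openai"],
    num_fewshot=0,                                   # zero-shot evaluation
)
print(results["results"])                            # per-task metrics
```

The same evaluation is also available from the command line via the `lm_eval` entry point.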
Alternatives and similar repositories for lm-evaluation-harness:
Users interested in lm-evaluation-harness are comparing it to the libraries listed below.
- Tools for merging pretrained large language models. ☆5,260 · Updated last week
- Robust recipes to align language models with human and AI preferences. ☆5,001 · Updated 3 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆6,697 · Updated this week
- Train transformer language models with reinforcement learning. ☆11,782 · Updated this week
- PyTorch native post-training library. ☆4,856 · Updated this week
- General technology for enabling AI capabilities w/ LLMs and MLLMs. ☆3,845 · Updated last month
- QLoRA: Efficient Finetuning of Quantized LLMs. ☆10,250 · Updated 8 months ago
- Go ahead and axolotl questions. ☆8,648 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,702 · Updated last month
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks. ☆6,795 · Updated 7 months ago
- Large Language Model Text Generation Inference. ☆9,777 · Updated this week
- Fast and memory-efficient exact attention. ☆15,541 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (a minimal usage sketch follows this list). ☆17,363 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆10,325 · Updated this week
- Aligning pretrained language models with instruction data generated by themselves. ☆4,269 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF). ☆4,584 · Updated last year
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆5,790 · Updated 2 months ago
- A quick guide (especially) for trending instruction finetuning datasets. ☆2,862 · Updated last year
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆5,603 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,220 · Updated 9 months ago
- ☆4,058 · Updated 8 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters. ☆5,807 · Updated 11 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT). ☆4,809 · Updated this week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters. ☆1,790 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,946 · Updated last month
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance. ☆2,911 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs (a short usage sketch follows this list). ☆38,475 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆11,592 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. ☆2,743 · Updated last week
- The hub for EleutherAI's work on interpretability and learning dynamics. ☆2,378 · Updated 2 months ago
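Referenced from the 🤗 PEFT entry above: a minimal LoRA fine-tuning setup sketch, assuming the `peft` and `transformers` packages are installed. The base model (`gpt2`) and the rank/alpha/target-module values are illustrative assumptions, not recommendations.

```python
# Minimal sketch: wrap a causal LM with LoRA adapters via 🤗 PEFT.
# gpt2 and the r/alpha/target_modules values are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(
    r=8,                         # low-rank dimension of the adapter matrices
    lora_alpha=16,               # scaling factor applied to the adapter output
    target_modules=["c_attn"],   # GPT-2's fused QKV projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The wrapped model can then be trained with any standard training loop or trainer; the frozen base weights stay untouched, so only the small adapter matrices need to be saved.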
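And from the vLLM entry: a minimal offline batched-inference sketch, assuming the `vllm` package and a supported GPU. The model name and sampling settings are illustrative assumptions.

```python
# Minimal sketch: offline batched generation with vLLM.
# facebook/opt-125m and the sampling settings are illustrative only.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")     # loads weights, builds the paged KV-cache engine
params = SamplingParams(temperature=0.8, max_tokens=32)
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)           # first sampled completion for each prompt
```

vLLM batches and schedules requests internally, which is what gives it the high throughput claimed in its description; the same engine also backs its OpenAI-compatible server mode.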