Zjh-819 / LLMDataHub
A quick guide, especially for trending instruction fine-tuning datasets
☆2,798 · Updated last year
Alternatives and similar repositories for LLMDataHub:
Users interested in LLMDataHub are comparing it to the libraries listed below:
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,911 · Updated last week
- Tools for merging pretrained large language models. ☆5,157 · Updated this week
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,644 · Updated 5 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,650 · Updated last week
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,625 · Updated last month
- Aligning pretrained language models with instruction data generated by themselves. ☆4,248 · Updated last year
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,291 · Updated last year
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) ☆2,346 · Updated 2 months ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,405 · Updated 9 months ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆2,801 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆4,933 · Updated 2 months ago
- Reference implementation for DPO (Direct Preference Optimization) ☆2,340 · Updated 5 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,349 · Updated last month
- Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,116 · Updated 10 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆6,568 · Updated this week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,781 · Updated last year
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆1,677 · Updated 5 months ago
- A framework for few-shot evaluation of language models. ☆7,576 · Updated this week
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆1,906 · Updated 2 weeks ago
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,050 · Updated last week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,006 · Updated 10 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,687 · Updated 2 weeks ago
- Instruction Tuning with GPT-4 ☆4,259 · Updated last year
- An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT) ☆4,109 · Updated this week
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,841 · Updated last year
- Alpaca dataset from Stanford, cleaned and curated ☆1,532 · Updated last year
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,099 · Updated last year
- MTEB: Massive Text Embedding Benchmark ☆2,114 · Updated this week
- LOMO: LOw-Memory Optimization ☆978 · Updated 6 months ago