pacman100 / LLM-Workshop
LLM Workshop by Sourab Mangrulkar
☆381 · Updated 11 months ago
Alternatives and similar repositories for LLM-Workshop
Users interested in LLM-Workshop are comparing it to the libraries listed below.
- Official repository for ORPO ☆453 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆111 · Updated 8 months ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆122 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated last year
- Best practices for distilling large language models. ☆547 · Updated last year
- Scalable toolkit for efficient model alignment ☆807 · Updated this week
- Generate textbook-quality synthetic LLM pretraining data ☆498 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆137 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 6 months ago
- Generative Representational Instruction Tuning ☆640 · Updated 2 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆301 · Updated last month
- ☆518 · Updated 6 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆462 · Updated last year
- A bagel, with everything. ☆320 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆187 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆631 · Updated last year
- batched loras ☆343 · Updated last year
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆353 · Updated 9 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- Toolkit for attaching, training, saving, and loading of new heads for transformer models ☆279 · Updated 3 months ago
- Distributed trainer for LLMs ☆575 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆258 · Updated 10 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆342 · Updated 6 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆851 · Updated last week
- A repository for research on medium-sized language models. ☆497 · Updated last month
- Extend existing LLMs well beyond the original training length with constant memory usage, without retraining ☆697 · Updated last year
- awesome synthetic (text) datasets ☆281 · Updated 7 months ago
- Fine-Tuning Embedding for RAG with Synthetic Data ☆500 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆727 · Updated 8 months ago
- Let's build better datasets, together! ☆259 · Updated 5 months ago