jondurbin / bagel
A bagel, with everything.
☆320 · Updated last year
Alternatives and similar repositories for bagel
Users interested in bagel are comparing it to the repositories listed below.
- Multipack distributed sampler for fast padding-free training of LLMs (see the packing sketch after this list) ☆188 · Updated 9 months ago
- batched loras ☆342 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- Merge Transformers language models using gradient parameters. ☆206 · Updated 9 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆257 · Updated 10 months ago
- Generate textbook-quality synthetic LLM pretraining data ☆497 · Updated last year
- An independent implementation of 'Layer Selective Rank Reduction' ☆238 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models (see the SVD extraction sketch after this list) ☆171 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- Pre-training code for Amber 7B LLM ☆166 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆461 · Updated last year
- Official repository for LongChat and LongEval ☆518 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆651 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆236 · Updated last year
- Experiments on speculative sampling with Llama models (see the speculative sampling sketch after this list) ☆126 · Updated last year
- Extend existing LLMs far beyond their original training length with constant memory usage, without retraining ☆697 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆688 · Updated 9 months ago
- Spherical (SLERP) merging of PyTorch/HF-format language models with minimal feature loss (see the SLERP sketch after this list) ☆123 · Updated last year
- Official repository for ORPO ☆453 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes…) ☆147 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆221 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆725 · Updated 8 months ago
- Tune any FALCON in 4-bit ☆466 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs (see the 4-bit config sketch after this list) ☆77 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆187 · Updated last year
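
A few of the entries above name concrete techniques; short, hedged sketches of each follow. First, the multipack sampler: its core idea is packing variable-length sequences into fixed token budgets so that batches need no padding. This is a minimal first-fit-decreasing sketch, not the repo's actual distributed sampler; all names are illustrative.

```python
# Minimal sketch of padding-free "multipack" batching: sort sequences by
# length, then first-fit them into bins capped at a fixed token budget.
# Names are illustrative; the actual repo exposes a distributed sampler.

def pack_sequences(lengths, budget=4096):
    """Greedy first-fit-decreasing packing of sequence lengths into bins."""
    bins = []  # each bin: [remaining_budget, [sequence indices]]
    order = sorted(range(len(lengths)), key=lambda i: -lengths[i])
    for i in order:
        if lengths[i] > budget:
            raise ValueError(f"sequence {i} exceeds the token budget")
        for b in bins:
            if lengths[i] <= b[0]:
                b[0] -= lengths[i]
                b[1].append(i)
                break
        else:
            bins.append([budget - lengths[i], [i]])
    return [b[1] for b in bins]

if __name__ == "__main__":
    lengths = [3000, 1500, 1200, 800, 700, 300]
    for batch in pack_sequences(lengths, budget=4096):
        print(batch, sum(lengths[i] for i in batch))
```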
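
For the low-rank adapter extraction entry: the idea is to factor the difference between fine-tuned and base weights into a LoRA-shaped low-rank product. A sketch under that assumption, using a truncated SVD; the function name and rank are made up, not the repo's API.

```python
import torch

def extract_lora(w_base, w_tuned, rank=16):
    """Approximate (w_tuned - w_base) with a rank-r product b @ a,
    the shape used by LoRA adapters. A sketch, not the repo's code."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    b = u[:, :rank] * s[:rank]  # (out_features, rank), singular values folded in
    a = vh[:rank, :]            # (rank, in_features)
    return a, b

if __name__ == "__main__":
    torch.manual_seed(0)
    w0 = torch.randn(64, 64)
    w1 = w0 + 0.01 * torch.randn(64, 8) @ torch.randn(8, 64)  # low-rank drift
    a, b = extract_lora(w0, w1, rank=8)
    print("reconstruction error:", torch.norm(w1 - w0 - b @ a).item())
```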
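
For the speculative sampling entry: a small draft model proposes k tokens, and the large target model accepts or rejects them so the final samples still follow the target distribution exactly. The sketch below uses toy stand-in distributions instead of real models, and scores draft tokens sequentially where a real implementation would verify them in one batched forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50

def draft_probs(prefix):
    """Stand-in for a small draft model's next-token distribution (hypothetical)."""
    local = np.random.default_rng(hash(tuple(prefix)) % (2**32))
    p = local.random(VOCAB) + 0.1
    return p / p.sum()

def target_probs(prefix):
    """Stand-in for the large target model's next-token distribution (hypothetical)."""
    local = np.random.default_rng((hash(tuple(prefix)) + 1) % (2**32))
    p = local.random(VOCAB) + 0.1
    return p / p.sum()

def speculative_step(prefix, k=4):
    """One round of speculative sampling: draft k tokens, then accept/reject
    them so the output distribution matches the target model exactly."""
    draft, dprobs, ctx = [], [], list(prefix)
    for _ in range(k):
        p = draft_probs(ctx)
        t = rng.choice(VOCAB, p=p)
        draft.append(t); dprobs.append(p); ctx.append(t)
    out = list(prefix)
    for i, t in enumerate(draft):
        q = target_probs(out)             # target dist given accepted prefix
        p = dprobs[i]
        if rng.random() < min(1.0, q[t] / p[t]):
            out.append(t)                 # accept the drafted token
        else:
            resid = np.maximum(q - p, 0)  # resample from the residual dist
            out.append(rng.choice(VOCAB, p=resid / resid.sum()))
            return out
    out.append(rng.choice(VOCAB, p=target_probs(out)))  # free bonus token
    return out

print(speculative_step([1, 2, 3]))
```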
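
For the spherical merge entry: SLERP interpolates two weight tensors along the arc between them rather than along the straight line, which tends to preserve weight magnitudes better than plain averaging. A minimal per-tensor sketch, assuming "Spherical Merge" refers to spherical linear interpolation; the repo works over full checkpoints and handles edge cases this omits.

```python
import torch

def slerp(w_a, w_b, t=0.5, eps=1e-8):
    """Spherical linear interpolation between two same-shape weight tensors."""
    a = w_a.flatten().float()
    b = w_b.flatten().float()
    dot = torch.clamp(torch.dot(a / (a.norm() + eps), b / (b.norm() + eps)),
                      -1.0, 1.0)
    theta = torch.acos(dot)     # angle between the two weight vectors
    if theta.abs() < 1e-4:      # nearly parallel: fall back to plain lerp
        return (1 - t) * w_a + t * w_b
    s = torch.sin(theta)
    out = (torch.sin((1 - t) * theta) / s) * a + (torch.sin(t * theta) / s) * b
    return out.reshape(w_a.shape).to(w_a.dtype)

# Usage over two checkpoints (sd_a, sd_b are matching state_dicts):
# merged = {name: slerp(sd_a[name], sd_b[name], t=0.5) for name in sd_a}
```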
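
For the QLoRA entry: the usual recipe loads the frozen base model in 4-bit NF4 via bitsandbytes and trains LoRA adapters on top with peft. A configuration sketch; the model id and hyperparameters are placeholders, not values from the repo.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base weights (bitsandbytes).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)

# Trainable low-rank adapters on the attention projections.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```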