NVIDIA / RULER
This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models?
☆698 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for RULER
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning (see the grouped-position sketch after this list) ☆610 · Updated 5 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆402 · Updated 2 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆644 · Updated last month
- Customizable implementation of the self-instruct paper. ☆1,019 · Updated 8 months ago
- YaRN: Efficient Context Window Extension of Large Language Models (see the RoPE-scaling sketch after this list) ☆1,341 · Updated 6 months ago
- Doing simple retrieval from LLMs at various context lengths to measure accuracy (see the needle-in-a-haystack sketch after this list) ☆1,554 · Updated 2 months ago
- An Open Source Toolkit For LLM Distillation ☆352 · Updated last month
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations ☆732 · Updated this week
- A bagel, with everything. ☆312 · Updated 7 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆1,612 · Updated this week
- Serving multiple LoRA-finetuned LLMs as one (see the batched-adapter sketch after this list) ☆979 · Updated 6 months ago
- Evaluate your LLM's responses with Prometheus and GPT-4 💯 ☆795 · Updated 2 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,141 · Updated 3 weeks ago
- Official repository for "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient and high-quality s… ☆480 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆788 · Updated this week
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆667 · Updated 7 months ago
- Chat Templates for 🤗 HuggingFace Large Language Models (see the usage sketch after this list) ☆529 · Updated last week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆664 · Updated this week
- Optimizing inference proxy for LLMs ☆1,342 · Updated this week
- Generative Representational Instruction Tuning ☆562 · Updated this week
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆368 · Updated 4 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆553 · Updated 8 months ago
- Automatically evaluate your LLMs in Google Colab ☆557 · Updated 6 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆697 · Updated last week
- [NeurIPS'24 Spotlight] To speed up inference for long-context LLMs, compute attention with approximate and dynamic sparsity, which reduces in… ☆776 · Updated this week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆435 · Updated 7 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆415 · Updated 10 months ago
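
For the Self-Extend entry above: a minimal sketch, assuming nothing from that repo's code, of the grouped-position idea the paper describes: relative positions inside a local window are kept as-is, while distances beyond it are floor-divided into groups so they stay within the trained range. The real implementation applies this inside the attention/RoPE computation; all names and parameters here are illustrative.

```python
import numpy as np

def self_extend_positions(seq_len: int, group_size: int, window: int) -> np.ndarray:
    """Remap relative positions: keep distances <= window untouched and
    floor-divide the remainder into groups (simplified illustration of
    Self-Extend's grouped attention, not the repo's actual code)."""
    q = np.arange(seq_len)[:, None]  # query positions
    k = np.arange(seq_len)[None, :]  # key positions
    rel = q - k                      # standard relative distance
    grouped = window + (rel - window) // group_size
    return np.where(rel <= window, rel, grouped)

# Distances beyond the 2-token window collapse into coarse groups.
print(self_extend_positions(8, group_size=4, window=2))
```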
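For the YaRN entry: a hedged sketch of the underlying RoPE machinery. It shows plain linear position interpolation (dividing all inverse frequencies by a scale factor); YaRN itself goes further, blending interpolation per frequency band and rescaling attention temperature, so treat this as background rather than YaRN's method.

```python
import numpy as np

def rope_inv_frequencies(dim: int, base: float = 10000.0, scale: float = 1.0) -> np.ndarray:
    """Inverse frequencies for rotary position embeddings; scale > 1
    stretches effective positions (the linear-interpolation baseline)."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return inv_freq / scale

def rope_angles(positions: np.ndarray, inv_freq: np.ndarray) -> np.ndarray:
    # One rotation angle per (position, frequency) pair.
    return np.outer(positions, inv_freq)

# Illustrative: a model trained at 4k stretched to 16k uses scale ~ 4.
angles = rope_angles(np.arange(16384), rope_inv_frequencies(128, scale=4.0))
```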
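For the needle-in-a-haystack entry: a minimal sketch of the evaluation loop such repos run. `ask_model` is a hypothetical stand-in for your model call, and the scoring (substring match on the expected answer) plus the "magic number" question are common simplifications that assume the needle states a memorable fact.

```python
def build_haystack(filler: str, needle: str, depth: float, n_chars: int) -> str:
    """Place the needle at a relative depth (0.0 = start, 1.0 = end)
    inside filler text truncated to roughly n_chars characters."""
    hay = (filler * (n_chars // len(filler) + 1))[:n_chars]
    cut = int(depth * len(hay))
    return hay[:cut] + " " + needle + " " + hay[cut:]

def needle_accuracy(ask_model, lengths, depths, needle, answer, filler):
    """Grid over context lengths and needle depths; score 1 when the
    expected answer appears in the model's response."""
    scores = {}
    for n in lengths:
        for d in depths:
            prompt = (build_haystack(filler, needle, d, n)
                      + "\n\nWhat is the magic number mentioned above?")
            scores[(n, d)] = int(answer in ask_model(prompt))
    return scores
```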
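For the multi-LoRA serving entry: a dense NumPy sketch of the core trick, namely one shared base weight with each request in the batch applying its own low-rank delta. Production servers fuse this into custom GPU kernels; the names and shapes here are illustrative.

```python
import numpy as np

def batched_lora_forward(x, W, adapters, adapter_ids):
    """y = x @ W.T plus, per request i, the low-rank correction
    x[i] @ A.T @ B.T from that request's adapter."""
    y = x @ W.T                      # shared base projection
    for i, aid in enumerate(adapter_ids):
        A, B = adapters[aid]         # A: (r, d_in), B: (d_out, r)
        y[i] += x[i] @ A.T @ B.T     # per-request LoRA delta
    return y

d_in, d_out, r = 16, 16, 4
W = np.random.randn(d_out, d_in)
adapters = {a: (np.random.randn(r, d_in), np.random.randn(d_out, r))
            for a in ("adapter_a", "adapter_b")}
x = np.random.randn(3, d_in)
y = batched_lora_forward(x, W, adapters, ["adapter_a", "adapter_b", "adapter_a"])
```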
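For the chat-templates entry: Hugging Face tokenizers expose `apply_chat_template`, which is what such template collections plug into. The model name below is only an example; any chat model that ships a template works the same way.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")  # example model
messages = [
    {"role": "user", "content": "What is RULER?"},
    {"role": "assistant", "content": "A long-context benchmark."},
    {"role": "user", "content": "How long are its tasks?"},
]
# Render the conversation with the model's own template and append the
# tokens that cue the assistant's next turn.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```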