NVIDIA / RULER
This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models?
☆1,082 · Updated 3 months ago
Alternatives and similar repositories for RULER:
Users that are interested in RULER are comparing it to the libraries listed below
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning☆653 · Updated 11 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM.☆472 · Updated 8 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM☆1,316 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models☆1,482 · Updated last year
- Optimizing inference proxy for LLMs☆2,210 · Updated this week
- An Open Source Toolkit For LLM Distillation☆594 · Updated last week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi…☆2,671 · Updated last week
- Doing simple retrieval from LLM models at various context lengths to measure accuracy☆1,847 · Updated 8 months ago
- Customizable implementation of the self-instruct paper.☆1,044 · Updated last year
- Large-scale LLM inference engine☆1,413 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware.☆720 · Updated 7 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends☆1,500 · Updated this week
- Chat Templates for 🤗 HuggingFace Large Language Models☆656 · Updated 4 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2.☆152 · Updated 11 months ago
- A library for advanced large language model reasoning☆2,116 · Updated last month
- VPTQ, a flexible and extreme low-bit quantization algorithm☆632 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximately and dynamically sparse-compute the attention…☆1,005 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding☆1,246 · Updated 2 months ago
- Convert Compute And Books Into Instruct-Tuning Datasets! Makes: QA, RP, Classifiers.☆1,436 · Updated 2 months ago
- Minimalistic large language model 3D-parallelism training☆1,850 · Updated this week
- Serving multiple LoRA-finetuned LLMs as one☆1,058 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data …☆697 · Updated last month
- Evaluate your LLM's response with Prometheus and GPT4 💯☆934 · Updated 2 weeks ago
- Enforce the output format (JSON Schema, regex, etc.) of a language model☆1,796 · Updated 2 months ago
- OLMoE: Open Mixture-of-Experts Language Models☆739 · Updated last month
- Recipes to scale inference-time compute of open models☆1,068 · Updated this week