NVIDIA / NeMo-Curator
Scalable data pre-processing and curation toolkit for LLMs
☆615 · Updated this week
Related projects
Alternatives and complementary repositories for NeMo-Curator
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆811 · Updated this week
- Scalable toolkit for efficient model alignment ☆620 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆1,260 · Updated this week
- ☆451 · Updated 3 weeks ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆1,634 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,045 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆685 · Updated this week
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆797 · Updated 2 months ago
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,095 · Updated last week
- An Open Source Toolkit For LLM Distillation ☆356 · Updated 2 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆803 · Updated 3 months ago
- Official repository for ORPO ☆421 · Updated 5 months ago
- LLMPerf is a library for validating and benchmarking LLMs ☆645 · Updated 3 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆647 · Updated last month
- Serving multiple LoRA fine-tuned LLMs as one ☆984 · Updated 6 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆283 · Updated last week
- awesome synthetic (text) datasets ☆242 · Updated 3 weeks ago
- Framework for enhancing LLMs for RAG tasks using fine-tuning. ☆504 · Updated this week
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆293 · Updated 11 months ago
- Generative Representational Instruction Tuning ☆567 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆460 · Updated this week
- DataComp for Language Models ☆1,157 · Updated this week
- ReFT: Representation Finetuning for Language Models ☆1,159 · Updated 2 weeks ago
- [NeurIPS'24 Spotlight] To speed up long-context LLM inference, compute the attention with approximate and dynamic sparsity, which reduces in… ☆791 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024☆229Updated 3 weeks ago
- Best practices for distilling large language models.☆397Updated 9 months ago
- Code for Husky, an open-source language agent that solves complex, multi-step reasoning tasks. Husky v1 addresses numerical, tabular and …☆328Updated 5 months ago
- ☆641Updated this week
- This project showcases an LLMOps pipeline that fine-tunes a small LLM to prepare for outages of the service LLM. ☆289 · Updated 2 months ago