swiss-ai / Megatron-LM
Ongoing research training transformer models at scale
☆32 · Updated this week
Alternatives and similar repositories for Megatron-LM
Users interested in Megatron-LM are comparing it to the libraries listed below.
- ☆54 · Updated 11 months ago
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆67 · Updated 11 months ago
- ☆49 · Updated 8 months ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- entropix-style sampling + GUI ☆27 · Updated 11 months ago
- ☆40 · Updated 9 months ago
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated last year
- Python library to use Pleias-RAG models ☆63 · Updated 5 months ago
- ☆51 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆65 · Updated last year
- ☆31 · Updated last year
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Nexusflow function call, tool use, and agent benchmarks. ☆29 · Updated 9 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models ☆111 · Updated 6 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆237 · Updated last week
- Aana SDK is a powerful framework for building AI-enabled multimodal applications. ☆52 · Updated last month
- ☆67 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆74 · Updated 5 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆66 · Updated last year
- Open Implementations of LLM Analyses ☆107 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 8 months ago
- ☆43 · Updated 3 weeks ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 8 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 10 months ago
- FMS Model Optimizer is a framework for developing reduced-precision neural network models. ☆20 · Updated this week
- ☆136 · Updated last month