stanford-futuredata / Megatron-LM
Ongoing research training transformer models at scale
☆38 · Updated last year
Alternatives and similar repositories for Megatron-LM
Users interested in Megatron-LM are comparing it to the libraries listed below:
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- ☆116 · Updated 9 months ago
- ☆46 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- look how they massacred my boy ☆63 · Updated 11 months ago
- An automated tool for discovering insights from research paper corpora ☆139 · Updated last year
- Auto fine-tuning of models with synthetic data ☆76 · Updated last year
- KMD is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricaci… ☆24 · Updated last year
- ☆67 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- Simple examples using Argilla tools to build AI ☆56 · Updated 10 months ago
- Never forget anything again! Combine AI and intelligent tooling for a local knowledge base to track, catalogue, annotate, and plan for you… ☆38 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last year
- A framework for orchestrating AI agents using a mermaid graph ☆77 · Updated last year
- Using modal.com to process FineWeb-edu data ☆20 · Updated 6 months ago
- ☆28 · Updated last year
- A seamless matchmaking application that is programmed with Cohere Command R+, Stanford NLP DSPy framework, Weaviate Vector store and Crew… ☆59 · Updated last year
- ☆135 · Updated last year
- ☆121 · Updated last year
- Verbosity control for AI agents ☆65 · Updated last year
- ☆86 · Updated last year
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆62 · Updated 11 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 9 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 11 months ago
- ☆162 · Updated 2 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public domain corpora, and Monte Carlo tree search ☆32 · Updated 7 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 8 months ago
- Clean up your LLM datasets ☆114 · Updated 2 years ago
- ☆19 · Updated last year