pyember / ember
☆187 · Updated 2 weeks ago
Alternatives and similar repositories for ember
Users interested in ember are comparing it to the libraries listed below.
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆173 · Updated 4 months ago
- ☆134 · Updated 3 months ago
- ☆128 · Updated 3 months ago
- ☆90 · Updated this week
- ☆41 · Updated 5 months ago
- Long context evaluation for large language models ☆220 · Updated 4 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆250 · Updated last week
- prime-rl is a codebase for decentralized async RL training at scale ☆368 · Updated this week
- Open source interpretability artefacts for R1. ☆154 · Updated 2 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆535 · Updated this week
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆73 · Updated last week
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆128 · Updated last year
- A framework for optimizing DSPy programs with RL ☆89 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 6 months ago
- PyTorch Single Controller ☆318 · Updated this week
- PCCL (Prime Collective Communications Library) implements fault tolerant collective communications over IP ☆96 · Updated last month
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆207 · Updated this week
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆91 · Updated last month
- ☆259 · Updated last week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆156 · Updated this week
- ☆115 · Updated 6 months ago
- Train your own SOTA deductive reasoning model ☆96 · Updated 4 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆112 · Updated this week
- Storing long contexts in tiny caches with self-study ☆85 · Updated 3 weeks ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆314 · Updated 8 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆206 · Updated last month
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆211 · Updated this week
- A benchmark for LLMs on complicated tasks in the terminal ☆240 · Updated this week
- XTR/WARP (SIGIR'25) is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR. ☆137 · Updated 2 months ago
- code for training & evaluating Contextual Document Embedding models ☆194 · Updated 2 months ago