SamsungSAILMontreal / ByteCraft
☆27 · Updated last month
Alternatives and similar repositories for ByteCraft
Users interested in ByteCraft are comparing it to the repositories listed below.
- Nexusflow function call, tool use, and agent benchmarks. ☆19 · Updated 5 months ago
- Implementation of https://arxiv.org/pdf/2312.09299 ☆20 · Updated 11 months ago
- ☆48 · Updated 10 months ago
- Training hybrid models for dummies. ☆21 · Updated 4 months ago
- Latent Large Language Models ☆18 · Updated 9 months ago
- ☆19 · Updated 2 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆68 · Updated 3 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆25 · Updated 6 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated last week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- ☆38 · Updated 10 months ago
- Rust bindings for CTranslate2 ☆14 · Updated last year
- ☆9 · Updated last month
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 6 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies a… ☆36 · Updated last month
- LLMs as Collaboratively Edited Knowledge Bases ☆45 · Updated last year
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆32 · Updated 3 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated last month
- Data preparation code for the CrystalCoder 7B LLM ☆44 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆74 · Updated last week
- GoldFinch and other hybrid transformer components ☆10 · Updated 3 weeks ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 4 months ago
- A fast, local, and secure approach for training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback. ☆24 · Updated 2 months ago
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆50 · Updated this week
- Approximating the joint distribution of language models via MCTS ☆21 · Updated 7 months ago
- An alternative way of calculating self-attention ☆18 · Updated last year
- Lego for GRPO ☆28 · Updated last week
- ☆25 · Updated last month
- ☆36 · Updated 2 years ago