Amplify-Partners / annotation-reading-list
A reading list of relevant papers and projects on foundation model annotation
☆28 · Updated 9 months ago
Alternatives and similar repositories for annotation-reading-list
Users interested in annotation-reading-list are comparing it to the libraries listed below.
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆112 · Updated 2 months ago
- Storing long contexts in tiny caches with self-study ☆218 · Updated this week
- LLM training in simple, raw C/CUDA ☆15 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 4 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 8 months ago
- ☆40 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆62 · Updated 2 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 7 months ago
- ☆119 · Updated last month
- ☆144 · Updated 3 months ago
- ☆107 · Updated last week
- Training-Ready RL Environments + Evals ☆185 · Updated this week
- ☆136 · Updated 8 months ago
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- Open source interpretability artefacts for R1. ☆164 · Updated 7 months ago
- Long context evaluation for large language models ☆224 · Updated 9 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 9 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 9 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆60 · Updated 7 months ago
- Collection of LLM completions for reasoning-gym task datasets ☆30 · Updated 5 months ago
- Official Repo for InSTA: Towards Internet-Scale Training For Agents ☆55 · Updated 5 months ago
- rl from zero pretrain, can it be done? yes. ☆282 · Updated 2 months ago
- ☆87 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- NSA Triton Kernels written with GPT5 and Opus 4.1 ☆66 · Updated 4 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆148 · Updated 2 months ago
- Simple repository for training small reasoning models ☆47 · Updated 10 months ago
- ☆31 · Updated last year
- Commit0: Library Generation from Scratch ☆173 · Updated 7 months ago
- smolLM with Entropix sampler on pytorch ☆149 · Updated last year