Amplify-Partners / annotation-reading-list
A reading list of relevant papers and projects on foundation model annotation
☆25 · Updated last month
Alternatives and similar repositories for annotation-reading-list:
Users interested in annotation-reading-list are also comparing it to the repositories listed below.
- ☆38 · Updated 8 months ago
- ☆67 · Updated last month
- LLM training in simple, raw C/CUDA · ☆14 · Updated 3 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning · ☆97 · Updated this week
- look how they massacred my boy · ☆63 · Updated 5 months ago
- ☆22 · Updated last year
- ☆124 · Updated this week
- gzip Predicts Data-dependent Scaling Laws · ☆34 · Updated 10 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. · ☆165 · Updated 3 weeks ago
- ☆20 · Updated 11 months ago
- ☆48 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode…" · ☆37 · Updated last month
- Functional Benchmarks and the Reasoning Gap · ☆84 · Updated 5 months ago
- Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments. · ☆26 · Updated 3 weeks ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔ · ☆32 · Updated this week
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? · ☆52 · Updated last week
- Code and data for the paper "Why think step by step? Reasoning emerges from the locality of experience" · ☆59 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs · ☆48 · Updated last week
- ☆36 · Updated 2 months ago
- ☆27 · Updated 4 months ago
- PyTorch library for Active Fine-Tuning · ☆61 · Updated last month
- Small, simple agent task environments for training and evaluation · ☆18 · Updated 4 months ago
- Experiments for efforts to train a new and improved T5 · ☆77 · Updated 11 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ☆91 · Updated 3 weeks ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. · ☆30 · Updated 3 months ago
- NanoGPT (124M) quality in 2.67B tokens · ☆28 · Updated last month
- Train your own SOTA deductive reasoning model · ☆81 · Updated 3 weeks ago
- ☆13 · Updated 5 months ago
- Collection of LLM completions for reasoning-gym task datasets · ☆15 · Updated this week
- An introduction to LLM Sampling · ☆77 · Updated 3 months ago