LAION-AI / AIW
Alice in Wonderland code base for experiments and raw experiments data
☆131 · Updated 2 months ago
Alternatives and similar repositories for AIW
Users interested in AIW are comparing it to the libraries listed below.
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 9 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 7 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- ☆40 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆109 · Updated 11 months ago
- Code for ExploreToM ☆87 · Updated 4 months ago
- Pivotal Token Search ☆131 · Updated 4 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆151 · Updated 9 months ago
- A curated list of data for reasoning AI ☆140 · Updated last year
- ☆105 · Updated 3 months ago
- Code repository for the c-BTM paper ☆108 · Updated 2 years ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆110 · Updated 6 months ago
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆63 · Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 9 months ago
- ☆69 · Updated last year
- ☆81 · Updated last week
- ☆143 · Updated 2 months ago
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆59 · Updated last month
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- ☆104 · Updated 10 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 9 months ago
- Accompanying material for the sleep-time compute paper ☆117 · Updated 6 months ago
- look how they massacred my boy ☆63 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally applicable memory systems for transformers. ☆327 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 8 months ago
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated last year
- ☆55 · Updated last year
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year