srush / GPTWorld
A puzzle to learn about prompting
☆123 · Updated last year
Alternatives and similar repositories for GPTWorld:
Users interested in GPTWorld are comparing it to the libraries listed below.
- Puzzles for exploring transformers ☆331 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆91 · Updated 2 months ago
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆121 · Updated 9 months ago
- Fast bare-bones BPE for modern tokenizer training ☆142 · Updated 3 months ago
- Textbook on reinforcement learning from human feedback ☆154 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆177 · Updated 2 weeks ago
- ☆203 · Updated 6 months ago
- Simple Transformer in Jax ☆130 · Updated 7 months ago
- ☆164 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- ☆117 · Updated last week
- Website for hosting the Open Foundation Models Cheat Sheet. ☆263 · Updated 7 months ago
- JAX implementation of the Llama 2 model ☆213 · Updated 11 months ago
- seqax = sequence modeling + JAX ☆136 · Updated 6 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆183 · Updated 8 months ago
- Long context evaluation for large language models ☆198 · Updated this week
- Scaling Data-Constrained Language Models ☆330 · Updated 4 months ago
- Extract full next-token probabilities via language model APIs ☆229 · Updated 11 months ago
- An interactive exploration of Transformer programming. ☆256 · Updated last year
- ☆413 · Updated 3 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆158 · Updated 2 weeks ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆102 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters ☆249 · Updated 6 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆53 · Updated 2 months ago
- ☆75 · Updated 6 months ago
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆137 · Updated 3 weeks ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆102 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆221 · Updated this week
- Solve puzzles. Learn CUDA. ☆61 · Updated last year