r-three / git-theta
git extension for {collaborative, communal, continual} model development
☆207 · Updated 3 months ago
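For context, a minimal sketch of the workflow git-theta enables, driven from Python via subprocess. The `git theta install` and `git theta track` subcommands follow the project's README, but treat the exact invocations as assumptions and check the repository's documentation before relying on them.

```python
# Minimal sketch of a git-theta workflow, driven from Python.
# Assumes git-theta is installed (`pip install git-theta`) and that the
# `git theta install` / `git theta track` subcommands match the README;
# verify against the repository before relying on this.
import subprocess

def run(*cmd: str) -> None:
    """Run a command, echoing it, and fail loudly on error."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One-time setup: register git-theta's filters in the global git config.
run("git", "theta", "install")

# Mark a checkpoint so git stores it via git-theta instead of as an opaque blob.
run("git", "theta", "track", "model.pt")

# From here the checkpoint is versioned with ordinary git commands.
run("git", "add", "model.pt", ".gitattributes")
run("git", "commit", "-m", "Track model checkpoint with git-theta")
```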
Alternatives and similar repositories for git-theta:
Users interested in git-theta are comparing it to the libraries listed below.
- Erasing concepts from neural representations with provable guarantees ☆222 · Updated 3 weeks ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆255 · Updated last year
- Extract full next-token probabilities via language model APIs ☆228 · Updated 11 months ago
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆202 · Updated last month
- Scaling Data-Constrained Language Models ☆333 · Updated 4 months ago
- some common Huggingface transformers in maximal update parametrization (µP) ☆78 · Updated 2 years ago
- A puzzle to learn about prompting ☆124 · Updated last year
- Fast bare-bones BPE for modern tokenizer training (a generic sketch of the BPE merge loop follows this list) ☆146 · Updated 3 months ago
- Understand and test language model architectures on synthetic tasks. ☆181 · Updated last month
- An interactive exploration of Transformer programming. ☆258 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet. ☆262 · Updated 7 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆252 · Updated 7 months ago
- JAX implementation of the Llama 2 model ☆215 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆102 · Updated 2 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆542 · Updated this week
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) ☆310 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 6 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆95 · Updated 3 months ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆182 · Updated 2 months ago
- batched loras ☆338 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆122 · Updated 10 months ago
- A repository for log-time feedforward networks ☆219 · Updated 10 months ago
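As a point of reference for the BPE tokenizer entry above, here is a minimal sketch of the byte-pair-encoding training loop such repositories implement. The function name and byte-level starting vocabulary are illustrative assumptions about the general algorithm, not that project's actual API.

```python
# A generic sketch of the byte-pair-encoding (BPE) training loop that
# bare-bones BPE tokenizer projects implement; this illustrates the
# algorithm itself, not any particular repository's API.
from collections import Counter

def train_bpe(text: str, num_merges: int) -> list[tuple[int, int]]:
    """Learn `num_merges` BPE merge rules over the UTF-8 bytes of `text`."""
    ids = list(text.encode("utf-8"))  # start from raw byte tokens 0..255
    merges: list[tuple[int, int]] = []
    next_id = 256  # new token ids begin after the byte range
    for _ in range(num_merges):
        pairs = Counter(zip(ids, ids[1:]))  # count adjacent token pairs
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]  # most frequent adjacent pair
        merges.append(best)
        # Replace every occurrence of the pair with the new token id.
        out, i = [], 0
        while i < len(ids):
            if i + 1 < len(ids) and (ids[i], ids[i + 1]) == best:
                out.append(next_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out
        next_id += 1
    return merges

if __name__ == "__main__":
    print(train_bpe("low lower lowest", 5))
```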