berlino / seq_icl
☆53 · Updated last year
Alternatives and similar repositories for seq_icl
Users interested in seq_icl are comparing it to the libraries listed below.
- ☆48 · Updated last year
- Experiments on the impact of depth in transformers and SSMs. ☆31 · Updated 7 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆68 · Updated 10 months ago
- Stick-breaking attention ☆57 · Updated last week
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆74 · Updated 7 months ago
- ☆32 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 7 months ago
- Universal Neurons in GPT2 Language Models ☆29 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆120 · Updated 6 months ago
- ☆79 · Updated 10 months ago
- Sparse Autoencoder Training Library ☆52 · Updated last month
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆70 · Updated last week
- Yet another random morning idea to be quickly tried, with the architecture shared if it works: allowing the transformer to pause for any amount… ☆54 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆217 · Updated 2 weeks ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆76 · Updated last year
- ☆45 · Updated last year
- ☆37 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- nanoGPT-like codebase for LLM training ☆98 · Updated last month
- ☆53 · Updated 8 months ago
- Blog post ☆17 · Updated last year
- A toolkit for scaling law research ⚖ ☆49 · Updated 4 months ago
- ☆55 · Updated 11 months ago
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆57 · Updated last month
- ☆78 · Updated 11 months ago
- ☆28 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆134 · Updated this week
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 9 months ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year