corl-team / rebased
Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models"
☆159 · Updated last month
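The paper behind this repo concerns linear (kernelized) attention, where a feature map replaces the softmax so attention can be computed in time linear in sequence length. As a rough orientation only, here is a minimal non-causal linear-attention sketch in NumPy; the squared-ReLU feature map `phi` is an illustrative stand-in, not the learnable kernel the paper proposes.

```python
import numpy as np

def linear_attention(q, k, v, phi=lambda x: np.maximum(x, 0.0) ** 2):
    """Kernelized (softmax-free) attention in O(seq * d * dv).

    q, k: (seq, d); v: (seq, dv). `phi` maps queries/keys to a
    non-negative feature space; this default is an assumption for
    illustration -- the paper learns the kernel function instead.
    """
    q, k = phi(q), phi(k)
    kv = k.T @ v                       # (d, dv): accumulated key-value products
    z = k.sum(axis=0)                  # (d,): normalizer accumulator
    out = q @ kv                       # (seq, dv), equals phi(q) phi(k)^T v
    norm = np.maximum(q @ z, 1e-6)     # (seq,), guards against divide-by-zero
    return out / norm[:, None]
```

Because `(q kᵀ) v = q (kᵀ v)`, the quadratic attention matrix is never materialized, which is what makes linear-attention models attractive for long contexts.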
Alternatives and similar repositories for rebased:
Users interested in rebased are comparing it to the libraries listed below.
- ☆71 · Updated 5 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆61 · Updated 6 months ago
- PyTorch implementation of models from the Zamba2 series. ☆176 · Updated 3 weeks ago
- ☆132 · Updated this week
- Effective LLM Alignment Toolkit ☆113 · Updated last week
- Focused on fast experimentation and simplicity ☆65 · Updated last month
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆117 · Updated 5 months ago
- A benchmark for role-playing language models ☆81 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆181 · Updated last month
- ☆20 · Updated 6 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆114 · Updated 2 months ago
- ☆53 · Updated last year
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆215 · Updated 2 weeks ago
- Muon optimizer: ~+30% sample efficiency with <3% wall-clock overhead ☆251 · Updated last week
- 2D Positional Embeddings for Webpage Structural Understanding 🦙👀 ☆92 · Updated 5 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆184 · Updated 2 months ago
- ☆36 · Updated 2 weeks ago
- Supporting PyTorch FSDP for optimizers ☆76 · Updated 2 months ago
- ☆39 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆94 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆95 · Updated last month
- ☆31 · Updated 4 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆192 · Updated 6 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 4 months ago
- ☆49 · Updated 11 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆294 · Updated 2 months ago
- Token Omission Via Attention ☆122 · Updated 4 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆121 · Updated 9 months ago