sjelassi / transformers_ssm_copy
☆29 · Updated 10 months ago
Alternatives and similar repositories for transformers_ssm_copy:
Users interested in transformers_ssm_copy are comparing it to the libraries listed below.
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆25 · Updated 9 months ago
- Stick-breaking attention ☆41 · Updated this week
- ☆51 · Updated 7 months ago
- ☆46 · Updated 11 months ago
- ☆46 · Updated 6 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated 7 months ago
- Using FlexAttention to compute attention with different masking patterns ☆40 · Updated 3 months ago
- ☆18 · Updated 7 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆66 · Updated 2 months ago
- Here we will test various linear attention designs. ☆58 · Updated 8 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆52 · Updated 4 months ago
- Long Context Extension and Generalization in LLMs ☆40 · Updated 3 months ago
- ☆43 · Updated 5 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆21 · Updated 4 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆23 · Updated 4 months ago
- ☆44 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆23 · Updated 6 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆42 · Updated last month
- PyTorch building blocks for OLMo ☆47 · Updated this week
- ☆69 · Updated 4 months ago
- Official implementation of the paper "DeciMamba: Exploring the Length Extrapolation Potential of Mamba" ☆22 · Updated 5 months ago
- ☆78 · Updated 10 months ago
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆22 · Updated last month
- ☆32 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated last year
- Codebase for "Instruction Following without Instruction Tuning" ☆33 · Updated 3 months ago
- ☆24 · Updated 3 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆41 · Updated 5 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆66 · Updated 9 months ago
- Xmixers: a collection of SOTA efficient token/channel mixers ☆11 · Updated 2 months ago