facebookresearch / GCD
Computing the greatest common divisor with transformers; source code for the paper https://arxiv.org/abs/2308.15594
☆14 · Updated 4 months ago
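For context, the task studied in the paper is the classical greatest-common-divisor computation, which the transformers learn to reproduce. A minimal reference sketch of the Euclidean algorithm in Python (the function name `gcd` is illustrative, not taken from the repository):

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(48, 18))  # 6
```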
Alternatives and similar repositories for GCD
Users interested in GCD are comparing it to the libraries listed below.
- Code for the paper "Function-Space Learning Rates" ☆23 · Updated 6 months ago
- ☆34 · Updated last year
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated last year
- ☆33 · Updated last year
- ☆18 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Code for the paper https://arxiv.org/pdf/2309.06979.pdf ☆21 · Updated last year
- Official code repository for the paper "Key-value memory in the brain" ☆31 · Updated 9 months ago
- ☆15 · Updated last month
- ☆11 · Updated 7 months ago
- ☆16 · Updated last year
- train with kittens! ☆63 · Updated last year
- Official implementation of the transformer (TF) architecture proposed in the paper "Looped Transformers as Programmable Computers…" ☆29 · Updated 2 years ago
- Code and data for the paper "(How) do Language Models Track State?" ☆20 · Updated 8 months ago
- Measuring the Signal to Noise Ratio in Language Model Evaluation ☆27 · Updated 3 months ago
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated 2 years ago
- ☆35 · Updated last year
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year
- JAX implementation of "Fine-Tuning Language Models with Just Forward Passes" ☆19 · Updated 2 years ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆79 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Minimum Description Length probing for neural network representations ☆20 · Updated 10 months ago
- Clean RL implementation using MLX ☆33 · Updated last year
- Official Project Page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) ☆36 · Updated 3 weeks ago
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- Causal Analysis of Agent Behavior for AI Safety ☆19 · Updated 2 years ago
- Efficient scaling laws and collaborative pretraining ☆18 · Updated 2 months ago
- A repository for research on medium-sized language models ☆78 · Updated last year