srush / prof8
Experimental paper writing linter.
☆34 · Updated 4 months ago
Alternatives and similar repositories for prof8:
Users interested in prof8 are comparing it to the libraries listed below.
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated last month
- ☆29 · Updated 3 months ago
- Using FlexAttention to compute attention with different masking patterns. ☆40 · Updated 4 months ago
- Utilities for efficient fine-tuning, inference and evaluation of code generation models. ☆21 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations". ☆67 · Updated 3 months ago
- Personal solutions to the Triton Puzzles. ☆18 · Updated 6 months ago
- ☆31 · Updated 9 months ago
- Here we will test various linear attention designs. ☆58 · Updated 9 months ago
- PyTorch implementation for "Long Horizon Temperature Scaling" (ICML 2023). ☆20 · Updated last year
- ☆37 · Updated 9 months ago
- Minimum Description Length probing for neural network representations. ☆18 · Updated this week
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024). ☆32 · Updated 9 months ago
- ☆22 · Updated 4 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs". ☆21 · Updated 5 months ago
- Efficient scaling laws and collaborative pretraining. ☆13 · Updated this week
- Source-to-Source Debuggable Derivatives in Pure Python. ☆15 · Updated last year
- ☆30 · Updated 11 months ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks". ☆17 · Updated 3 weeks ago
- "Why Do We Need Weight Decay in Modern Deep Learning?" [NeurIPS 2024]. ☆59 · Updated 4 months ago
- NeurIPS 2024 tutorial on LLM Inference. ☆38 · Updated last month
- Source code for the paper "Positional Attention: Out-of-Distribution Generalization and Expressivity for Neural Algorithmic Reasoning". ☆14 · Updated 2 weeks ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval". ☆25 · Updated 9 months ago
- ☆50 · Updated 3 months ago
- ☆48 · Updated last year
- ☆25 · Updated last year
- ☆70 · Updated 5 months ago
- Stick-breaking attention. ☆41 · Updated 2 weeks ago
- ☆32 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion. ☆52 · Updated 5 months ago
- DPO, but faster 🚀. ☆29 · Updated last month