google-deepmind / alta
☆21 · Updated 4 months ago
Alternatives and similar repositories for alta:
Users interested in alta are comparing it to the repositories listed below.
- Minimum Description Length probing for neural network representations ☆19 · Updated last month
- Using FlexAttention to compute attention with different masking patterns ☆42 · Updated 5 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Updated 8 months ago
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆25 · Updated 10 months ago
- ☆29 · Updated last month
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆26 · Updated 5 months ago
- Mamba support for transformer lens ☆14 · Updated 5 months ago
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆40 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆20 · Updated 2 months ago
- Efficient Scaling laws and collaborative pretraining. ☆15 · Updated last month
- NeurIPS 2024 tutorial on LLM Inference ☆39 · Updated 2 months ago
- Learn online intrinsic rewards from LLM feedback ☆34 · Updated 2 months ago
- ☆73 · Updated 6 months ago
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆15 · Updated 3 weeks ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆52 · Updated 11 months ago
- ☆53 · Updated last year
- Stick-breaking attention ☆45 · Updated last month
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated 2 months ago
- Evaluation of neuro-symbolic engines ☆34 · Updated 7 months ago
- Official implementation of the transformer (TF) architecture suggested in a paper entitled "Looped Transformers as Programmable Computers… ☆24 · Updated last year
- ☆18 · Updated 9 months ago
- ☆59 · Updated 10 months ago
- Efficient Dictionary Learning with Switch Sparse Autoencoders (SAEs) ☆21 · Updated 3 months ago
- ☆21 · Updated 2 months ago
- Can Language Models Solve Olympiad Programming? ☆110 · Updated last month
- ☆49 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆69 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆123 · Updated 3 months ago
- ☆45 · Updated 11 months ago