Alternatives and similar repositories for transformer_in_transformer (☆46 · updated Oct 11, 2023)

Users interested in transformer_in_transformer are comparing it to the libraries listed below.
- (no description) · ☆52 · updated Jun 10, 2024
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models · ☆40 · updated Jun 10, 2024
- Code for the paper "Distinguishing the Knowable from the Unknowable with Language Models" · ☆11 · updated Apr 15, 2024
- (no description) · ☆12 · updated Jul 4, 2024
- Code associated with the paper "Sparse Bayesian Optimization" · ☆26 · updated Oct 31, 2023
- HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [ACL 2023] · ☆14 · updated Jul 11, 2023
- Code for "What really matters in matrix-whitening optimizers?" · ☆22 · updated Oct 31, 2025
- Code for T-MARS data filtering · ☆35 · updated Aug 23, 2023
- Memory Mosaics: networks of associative memories working in concert to achieve a prediction task · ☆61 · updated Jan 30, 2025
- (no description) · ☆118 · updated Feb 11, 2025
- (no description) · ☆12 · updated Nov 15, 2022
- All-in-one repository for fine-tuning & pretraining (large) language models · ☆15 · updated Mar 8, 2023
- Universal Neurons in GPT2 Language Models · ☆30 · updated May 28, 2024
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun · ☆57 · updated Mar 10, 2025
- T-Projection: a method for high-quality annotation projection of sequence-labeling datasets · ☆13 · updated Nov 21, 2023
- (no description) · ☆12 · updated Nov 22, 2024
- [NeurIPS 2022] Your Transformer May Not Be as Powerful as You Expect (official implementation) · ☆34 · updated Aug 6, 2023
- Resources related to the EMNLP 2021 paper "FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input Representations" · ☆13 · updated Dec 14, 2021
- 🧮 Algebraic Positional Encodings · ☆18 · updated Aug 20, 2025
- (no description) · ☆13 · updated Nov 13, 2020
- (no description) · ☆17 · updated Dec 19, 2024
- (no description) · ☆16 · updated Sep 27, 2023
- JAX/Flax implementation of the Hyena Hierarchy · ☆34 · updated Apr 27, 2023
- (no description) · ☆34 · updated Feb 12, 2025
- (no description) · ☆15 · updated Jul 24, 2022
- (no description) · ☆19 · updated Jun 10, 2024
- (no description) · ☆14 · updated Jul 11, 2022
- "PyTorch in Rust" · ☆17 · updated Feb 13, 2024
- Personal website · ☆16 · updated Feb 20, 2026
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (https://arxiv.org/abs/2305.17333) · ☆1,151 · updated Jan 11, 2024
- TART: a plug-and-play Transformer module for task-agnostic reasoning · ☆202 · updated Jun 22, 2023
- Repository for reproducing "Model-Based Robust Deep Learning" · ☆16 · updated Jan 22, 2021
- (no description) · ☆20 · updated May 5, 2023
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] · ☆71 · updated Sep 25, 2024
- Revisiting Efficient Training Algorithms for Transformer-based Language Models (NeurIPS 2023) · ☆81 · updated Aug 30, 2023
- (no description) · ☆33 · updated Apr 12, 2021
- (no description) · ☆16 · updated Apr 21, 2022
- (no description) · ☆18 · updated Mar 6, 2024
- MetA-Train to Explain · ☆18 · updated Feb 15, 2022