PASTA: Post-hoc Attention Steering for LLMs
☆136, updated Nov 24, 2024
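PASTA's core idea — steering a model's attention toward user-highlighted spans at inference time, with no retraining — can be sketched in miniature. The function below (`steer_attention` is a hypothetical name, and the single-row scaling is a simplification of the paper's per-head reweighting) downweights attention to non-highlighted tokens and renormalizes:

```python
def steer_attention(attn_row, highlight_idx, alpha=0.1):
    """Illustrative sketch, not PASTA's actual code: scale attention to
    non-highlighted tokens down by `alpha`, then renormalize so the row
    still sums to 1 — a post-hoc edit applied at inference time."""
    scaled = [p if i in highlight_idx else p * alpha
              for i, p in enumerate(attn_row)]
    total = sum(scaled)
    return [p / total for p in scaled]

# Toy attention row over 4 tokens; emphasize token 2.
row = [0.25, 0.25, 0.25, 0.25]
steered = steer_attention(row, {2}, alpha=0.1)
# Token 2's share rises from 0.25 to ~0.77; the row still sums to 1.
```

In the actual method this kind of reweighting is applied only to a selected subset of attention heads, which is what makes the steering precise rather than blunt.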
Alternatives and similar repositories for PASTA
Users interested in PASTA are comparing it to the repositories listed below.
- ☆16, updated Jul 23, 2024
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation with a Unidirectional Reward Model" (☆45, updated Oct 1, 2025)
- Implementation for the paper "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning" (☆11, updated Jan 10, 2025)
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models via Communicative Decoding (☆46, updated Jun 9, 2025)
- Tree Prompting: an easy-to-use scikit-learn interface for improved prompting (☆42, updated Oct 24, 2023)
- ☆14, updated Jan 10, 2024
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" (☆17, updated Feb 22, 2024)
- Linear Attention Sequence Parallelism (LASP) (☆89, updated Jun 4, 2024)
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… (☆45, updated Jun 30, 2024)
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs (☆94, updated Nov 17, 2024)
- Fork of the Flame repo for training some new work in development (☆19, updated Feb 27, 2026)
- Code for "Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model", EMNLP Findings 20… (☆28, updated Nov 2, 2023)
- ☆23, updated Jan 27, 2025
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain (☆106, updated Mar 14, 2024)
- Self-Alignment with Principle-Following Reward Models (☆170, updated Sep 18, 2025)
- Track the progress of LLM context utilisation (☆55, updated Apr 14, 2025)
- Open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" (☆30, updated Nov 12, 2024)
- ☆87, updated Dec 15, 2023
- Code and model for the NeurIPS 2024 Spotlight paper "Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training…" (☆44, updated Oct 16, 2024)
- ☆129, updated Jan 22, 2024
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) (☆163, updated Apr 13, 2025)
- ☆44, updated Jun 2, 2024
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) (☆73, updated Jun 25, 2024)
- ReCross: Unsupervised Cross-Task Generalization via Retrieval Augmentation (☆24, updated May 1, 2022)
- Resa: Transparent Reasoning Models via SAEs (☆47, updated Sep 23, 2025)
- Official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" (☆188, updated Feb 17, 2026)
- Unofficial PyTorch implementation of https://arxiv.org/abs/2112.05682 for linear memory cost in attention (☆12, updated Jan 16, 2022)
- ACL24 (☆11, updated Jun 7, 2024)
- Tiny evaluation of leading LLMs on competitive programming problems (☆14, updated Nov 28, 2024)
- JAX Scalify: end-to-end scaled arithmetic (☆18, updated Oct 30, 2024)
- Code for the paper "Closing the Curious Case of Neural Text Degeneration" (☆11, updated Apr 9, 2025)
- Official implementation of "Text Generation Beyond Discrete Token Sampling" (☆21, updated Aug 11, 2025)
- ☆10, updated Oct 17, 2022
- Scaling Data-Constrained Language Models (☆342, updated Jun 28, 2025)
- Code for the NeurIPS 2024 Spotlight paper "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" (☆92, updated Oct 30, 2024)
- ☆30, updated Jun 25, 2024
- PyTorch implementation of "HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models" (☆28, updated Mar 22, 2024)
- PROSE Public Benchmark Suite (☆31, updated Sep 15, 2025)
- Scripts for generating synthetic finetuning data for reducing sycophancy (☆121, updated Aug 16, 2023)