FranxYao / Retrieval-Head-with-Flash-Attention
Efficient retrieval head analysis with Triton flash attention that supports top-k probability
☆12 · Updated 8 months ago
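The description points to flash attention that also exposes top-k attention probabilities for retrieval-head analysis. Below is a minimal sketch of that idea using plain PyTorch rather than a Triton kernel: a head counts as a "retrieval" head when its top attended positions land on the context span being copied. The function name `retrieval_score` and the `needle_positions` argument are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch (not the repository's API): score heads by whether their
# top-k attention positions for the last query token hit the copied span.
import torch

def retrieval_score(q, k, needle_positions, top_k=1):
    """q, k: [heads, seq, dim]; needle_positions: indices of the copied span."""
    # full softmax attention; the repository instead uses a Triton
    # flash-attention kernel that returns top-k probabilities directly
    scores = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
    # top-k attended positions for the last query token of each head
    topk_pos = scores[:, -1].topk(top_k, dim=-1).indices          # [heads, top_k]
    hits = torch.isin(topk_pos, torch.as_tensor(needle_positions))
    return hits.any(dim=-1).float()                               # 1.0 per hitting head

# toy usage with random projections
heads, seq, dim = 4, 128, 64
q, k = torch.randn(heads, seq, dim), torch.randn(heads, seq, dim)
print(retrieval_score(q, k, needle_positions=[10, 11, 12], top_k=2))
```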
Alternatives and similar repositories for Retrieval-Head-with-Flash-Attention:
Users interested in Retrieval-Head-with-Flash-Attention are comparing it to the repositories listed below.
- ☆10 · Updated 7 months ago
- Towards Systematic Measurement for Long Text Quality ☆31 · Updated 5 months ago
- ☆28 · Updated last month
- Complexity Based Prompting for Multi-Step Reasoning ☆17 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆55 · Updated 7 months ago
- ☆13 · Updated 11 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆35 · Updated 10 months ago
- ☆33 · Updated 10 months ago
- Code for our EMNLP 2023 paper: "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆24 · Updated last year
- Resources for our ACL 2023 paper: "Distilling Script Knowledge from Large Language Models for Constrained Language Planning" ☆36 · Updated last year
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor