FranxYao / Retrieval-Head-with-Flash-Attention
Efficient retrieval head analysis with Triton flash attention that supports top-k attention probabilities
☆13 · Updated last year
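For background, a retrieval head is commonly scored by how often its strongest attention lands on the needle tokens being copied during decoding. A minimal NumPy sketch of that idea follows; the function name and the simplified top-1 criterion are illustrative assumptions, not this repository's actual API:

```python
import numpy as np

def retrieval_scores(attn, needle_span):
    """Per-head retrieval score: the fraction of decoding steps whose
    top-1 attended key position falls inside the needle span.

    attn: [num_heads, num_steps, kv_len] attention probabilities.
    needle_span: (start, end) key positions holding the needle.
    (Simplified sketch; real analyses also check the copied token.)
    """
    start, end = needle_span
    top1 = attn.argmax(axis=-1)              # [num_heads, num_steps]
    hits = (top1 >= start) & (top1 < end)    # top-1 lands in the needle
    return hits.mean(axis=-1)                # [num_heads]

# Toy example: 2 heads, 4 decoding steps, 10 key/value positions.
rng = np.random.default_rng(0)
attn = rng.random((2, 4, 10))
attn /= attn.sum(-1, keepdims=True)          # normalize to probabilities
scores = retrieval_scores(attn, (3, 6))      # one score per head, in [0, 1]
```

Heads whose score stays high across many needle placements would be flagged as retrieval heads; the repo's contribution is computing such statistics efficiently inside a Triton flash-attention kernel rather than materializing full attention matrices.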
Alternatives and similar repositories for Retrieval-Head-with-Flash-Attention
Users interested in Retrieval-Head-with-Flash-Attention are comparing it to the libraries listed below.
- ☆55 · Updated last year
- ☆30 · Updated 11 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" ☆22 · Updated 8 months ago
- ☆12 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 2 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆63 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- ☆14 · Updated last year
- ☆108 · Updated 5 months ago
- Complexity-Based Prompting for Multi-Step Reasoning ☆17 · Updated 2 years ago
- Analyzing LLM Alignment via Token Distribution Shift ☆17 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs