FranxYao/Retrieval-Head-with-Flash-Attention
Efficient retrieval-head analysis with a Triton flash-attention kernel that supports top-k probabilities.
☆12 · Updated 7 months ago
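The repository's own scoring code is not shown on this page; as a minimal sketch of the retrieval-head idea it builds on, the snippet below computes a per-head retrieval score: the fraction of decoding steps on which the head's top-attended position falls inside the "needle" span and the model actually copies that token. All names (`retrieval_score`, `needle_span`, etc.) are hypothetical, not the repository's API.

```python
import numpy as np

def retrieval_score(attn, needle_span, needle_ids, generated_ids):
    """Sketch of a retrieval score for one attention head.

    attn:          (num_steps, ctx_len) attention weights, one row per
                   decoding step (hypothetical layout, not the repo's).
    needle_span:   (start, end) positions of the needle in the context.
    needle_ids:    token ids of the needle, length end - start.
    generated_ids: token id produced at each decoding step.
    """
    start, end = needle_span
    hits = 0
    for step, row in enumerate(attn):
        pos = int(np.argmax(row))          # head's top-attended position
        # Count a hit when the head attends inside the needle AND the
        # model copies exactly the token it attended to.
        if start <= pos < end and generated_ids[step] == needle_ids[pos - start]:
            hits += 1
    return hits / len(attn)
```

Heads whose score stays high across many prompts are the candidate "retrieval heads"; a flash-attention kernel that also returns top-k attention probabilities lets this be computed without materializing the full attention matrix.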
Alternatives and similar repositories for Retrieval-Head-with-Flash-Attention:
Users interested in Retrieval-Head-with-Flash-Attention are comparing it to the libraries listed below.
- ☆10 · Updated 6 months ago
- Towards Systematic Measurement for Long Text Quality · ☆31 · Updated 4 months ago
- ☆26 · Updated 3 weeks ago
- ☆47 · Updated 9 months ago
- Methods and evaluation for aligning language models temporally · ☆27 · Updated 10 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems · ☆55 · Updated 6 months ago
- Long Context Extension and Generalization in LLMs · ☆40 · Updated 3 months ago
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" · ☆32 · Updated 3 months ago
- ☆16 · Updated 10 months ago
- ☆33 · Updated 9 months ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" · ☆39 · Updated last year
- Complexity-Based Prompting for Multi-Step Reasoning · ☆16 · Updated last year
- ☆12 · Updated last year
- ☆28 · Updated last year
- AbstainQA, ACL 2024 · ☆25 · Updated 3 months ago
- Resources for our ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning" · ☆35 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models · ☆31 · Updated 5 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" · ☆58 · Updated 10 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling · ☆44 · Updated 3 weeks ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" · ☆71 · Updated 7 months ago
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor · ☆28 · Updated 9 months ago
- [ACL'24 Oral] Analysing the Impact of Sequence Composition on Language Model Pre-Training · ☆18 · Updated 5 months ago
- Code for our EMNLP 2023 paper "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt-Sensitive Tasks" · ☆24 · Updated last year
- Official repository for "MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models" [NeurIPS 2024] · ☆57 · Updated 2 months ago
- Evaluate the Quality of Critique · ☆35 · Updated 7 months ago
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… · ☆22 · Updated 3 months ago
- ☆30 · Updated 4 months ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning · ☆53 · Updated 4 months ago
- Code and data for the paper "JiuZhang3.0" · ☆40 · Updated 7 months ago
- Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" · ☆68 · Updated last month