Re-Align / URIAL
☆310 · Updated last year
Alternatives and similar repositories for URIAL
Users interested in URIAL are comparing it to the libraries listed below.
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆463 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆224 · Updated 7 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark ☆379 · Updated 11 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆341 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆243 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆306 · Updated 9 months ago
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆336 · Updated 8 months ago
- ☆121 · Updated last year
- Generative Judge for Evaluating Alignment ☆239 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Length (ICLR 2024) ☆203 · Updated last year
- ☆288 · Updated 10 months ago
- Reproducible, flexible LLM evaluations ☆213 · Updated last month
- DSIR large-scale data selection framework for language model training ☆251 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆144 · Updated 7 months ago
- ☆317 · Updated 9 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆556 · Updated 6 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆498 · Updated 5 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 9 months ago
- ☆270 · Updated 2 years ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆137 · Updated 7 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆396 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆252 · Updated last year
- Unofficial implementation of AlpaGasus ☆91 · Updated last year
- FireAct: Toward Language Agent Fine-tuning ☆279 · Updated last year
- Reformatted Alignment ☆113 · Updated 8 months ago
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆184 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated last month
- RewardBench: the first evaluation tool for reward models. ☆604 · Updated last week
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆112 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆105 · Updated 4 months ago