wangitu / Ada-Instruct
☆17 · Updated last year
Alternatives and similar repositories for Ada-Instruct
Users interested in Ada-Instruct are comparing it to the repositories listed below.
- ☆41 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- Data preparation code for CrystalCoder 7B LLM ☆45 · Updated last year
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆122 · Updated 11 months ago
- ☆50 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- ☆48 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated 2 years ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆81 · Updated 2 years ago
- Code for NeurIPS LLM Efficiency Challenge ☆60 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆58 · Updated last week
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models ☆95 · Updated 11 months ago
- ☆34 · Updated last year
- ☆56 · Updated last year
- ☆51 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆57 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆78 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- ☆16 · Updated last year
- ☆37 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated 2 years ago