wangitu / Ada-Instruct
☆17 · Updated last year
Alternatives and similar repositories for Ada-Instruct
Users interested in Ada-Instruct are comparing it to the repositories listed below.
- ☆38 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge · ☆59 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" · ☆40 · Updated 9 months ago
- QLoRA with Enhanced Multi GPU Support · ☆37 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training adapts the model's context limit · ☆63 · Updated 2 years ago
- Data preparation code for CrystalCoder 7B LLM · ☆45 · Updated last year
- Code for KaLM-Embedding models · ☆91 · Updated 2 months ago
- ☆54 · Updated 9 months ago
- The official repository for Inheritune · ☆112 · Updated 6 months ago
- Data preparation code for Amber 7B LLM · ☆91 · Updated last year
- A repository for research on medium-sized language models · ☆78 · Updated last year
- ☆77 · Updated last year
- Official implementation for 'Extending LLMs' Context Window with 100 Samples' · ☆80 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] · ☆61 · Updated 10 months ago
- ☆48 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models · ☆85 · Updated 3 months ago
- Multi-Domain Expert Learning · ☆67 · Updated last year
- ☆49 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment · ☆60 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models · ☆94 · Updated 6 months ago
- Evaluating LLMs with CommonGen-Lite · ☆91 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs · ☆71 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite · ☆34 · Updated last year
- ☆49 · Updated 6 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention · ☆118 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… · ☆34 · Updated 2 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTo… · ☆56 · Updated 2 weeks ago
- ☆37 · Updated 2 years ago
- Pre-training code for CrystalCoder 7B LLM · ☆55 · Updated last year
- ☆51 · Updated last year