rasbt / low-rank-adaptation-blog
☆28 · Updated 2 years ago
Alternatives and similar repositories for low-rank-adaptation-blog:
Users interested in low-rank-adaptation-blog are comparing it to the repositories listed below.
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆42 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆57 · Updated last year
- ☆24 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated last year
- PyTorch implementation for MRL ☆18 · Updated last year
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆57 · Updated last year
- [SIGIR 2024 (Demo)] CoSearchAgent: A Lightweight Collaborative Search Agent with Large Language Models ☆23 · Updated last year
- Training and Inference Notebooks for the RedPajama (OpenLlama) models ☆18 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- ☆37 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 2 months ago
- a pipeline for using api calls to agnostically convert unstructured data into structured training data ☆30 · Updated 7 months ago
- ☆28 · Updated last year
- Experiments with generating opensource language model assistants ☆97 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on Pile ☆115 · Updated 2 years ago
- Reward Model framework for LLM RLHF ☆61 · Updated last year
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated last year
- Data preparation code for Amber 7B LLM ☆88 · Updated 11 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆33 · Updated 10 months ago
- ☆27 · Updated last month
- For experiments involving instruct gpt. Currently used for documenting open research questions. ☆71 · Updated 2 years ago
- 🚢 Data Toolkit for Sailor Language Models ☆88 · Updated 2 months ago
- Demonstration that finetuning RoPE model on larger sequences than the pre-trained model adapts the model context limit ☆63 · Updated last year
- experiments with inference on llama ☆104 · Updated 10 months ago
- Code and models for BERT on STILTs ☆53 · Updated 2 years ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆76 · Updated 6 months ago
- Implementation of "SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models" ☆27 · Updated 2 months ago
- ☆29 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year