rasbt / low-rank-adaptation-blog
☆25, updated last year
Related projects
Alternatives and complementary repositories for low-rank-adaptation-blog
- Code for the NeurIPS LLM Efficiency Challenge (☆54, updated 7 months ago)
- ReBase: Training Task Experts through Retrieval Based Distillation (☆27, updated 4 months ago)
- Lightweight demos for finetuning LLMs, powered by 🤗 transformers and open-source datasets (☆64, updated last month)
- Codebase accompanying the Summary of a Haystack paper (☆72, updated 2 months ago)
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer (☆37, updated 7 months ago)
- (☆46, updated this week)
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… (☆34, updated last year)
- Small and Efficient Mathematical Reasoning LLMs (☆71, updated 9 months ago)
- (☆27, updated 5 months ago)
- (☆47, updated last year)
- A library for squeakily cleaning and filtering language datasets (☆45, updated last year)
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile (☆115, updated last year)
- A pipeline for using API calls to agnostically convert unstructured data into structured training data (☆28, updated 2 months ago)
- (☆45, updated 2 months ago)
- (☆93, updated last year)
- (☆24, updated last year)
- Minimal PyTorch implementation of BM25 (with sparse tensors) (☆90, updated 8 months ago)
- Collection of autoregressive model implementations (☆67, updated this week)
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset (☆92, updated last year)
- (☆28, updated 8 months ago)
- (☆29, updated 4 months ago)
- Experiments with inference on LLaMA (☆105, updated 5 months ago)
- QLoRA with enhanced multi-GPU support (☆36, updated last year)
- (☆112, updated last month)
- A framework for few-shot evaluation of language models (☆18, updated 2 weeks ago)
- (☆42, updated 4 months ago)
- (☆29, updated 9 months ago)
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA (☆101, updated 3 months ago)
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… (☆52, updated last week)
- (☆37, updated last year)