rasbt / low-rank-adaptation-blog
☆29 · Updated 2 years ago
Alternatives and similar repositories for low-rank-adaptation-blog
Users interested in low-rank-adaptation-blog are comparing it to the libraries listed below.
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- Reward Model framework for LLM RLHF ☆61 · Updated 2 years ago
- Data preparation code for the Amber 7B LLM ☆91 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated 9 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- For experiments involving InstructGPT. Currently used for documenting open research questions. ☆71 · Updated 2 years ago
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models ☆83 · Updated last year
- Experiments with inference on LLaMA ☆104 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- ReBase: Training Task Experts through Retrieval-Based Distillation ☆29 · Updated 6 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated 2 years ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆44 · Updated 8 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 8 months ago
- Open Implementations of LLM Analyses ☆106 · Updated 10 months ago
- This project studies the performance and robustness of language models and task-adaptation methods. ☆150 · Updated last year
- ☆88 · Updated last year
- ☆29 · Updated last week
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- Safety Score for Pre-Trained Language Models ☆95 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆269 · Updated last year
- ☆69 · Updated last year
- ☆125 · Updated 10 months ago
- Code repository for the c-BTM paper ☆107 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆97 · Updated last year
- Pre-training code for the CrystalCoder 7B LLM ☆55 · Updated last year
- This project shows how to derive the total number of training tokens from a large text dataset from 🤗 datasets with Apache Beam and Data… ☆27 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year