arjunbansal / awesome-oss-llm-ift-rlhf
Collection of open-source implementations of LLMs with instruction fine-tuning (IFT) and RLHF that are striving to reach ChatGPT-level performance
☆51 · Updated last year
Alternatives and similar repositories for awesome-oss-llm-ift-rlhf
Users interested in awesome-oss-llm-ift-rlhf are comparing it to the repositories listed below
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆69 · Updated last year
- Experiments with inference on LLaMA ☆104 · Updated last year
- ☆34 · Updated 2 years ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- Multi-Domain Expert Learning ☆67 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA (see the sketch after this list) ☆104 · Updated last month
- A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs ☆114 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach ☆168 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated 2 years ago
- Batched LoRAs ☆344 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated 8 months ago
- ☆87 · Updated last year
- ☆199 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes, …) ☆146 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆106 · Updated 7 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- ☆92 · Updated last year
- ☆84 · Updated last year
- ☆95 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆223 · Updated last year
- ☆74 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models ☆310 · Updated 2 years ago
- An all-new language model that processes ultra-long sequences of 100,000+ tokens ultra-fast ☆151 · Updated 10 months ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Pre-training code for CrystalCoder 7B LLM ☆54 · Updated last year
- QLoRA with enhanced multi-GPU support ☆37 · Updated last year
- An implementation of Self-Extend, which expands the context window via grouped attention (see the sketch below) ☆119 · Updated last year
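
Several of the repositories above (the PEFT LoRA trainer, the MPT-7B LoRA patch, the QLoRA comparisons) share one core pattern: wrapping a frozen base model with trainable low-rank adapters. Below is a minimal sketch of that pattern using Hugging Face PEFT; the model name, target modules, and hyperparameters are illustrative assumptions, not values taken from any repo listed here.

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT (a sketch, not any
# specific repo's recipe). Model name and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Wrap the base model with low-rank adapters; only the adapter weights train.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA-style blocks
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

The resulting model plugs into a standard `transformers` training loop; at save time only the small adapter weights need to be checkpointed, which is what makes the consumer-hardware finetuning repos above practical.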
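The last item implements Self-Extend, which enlarges the usable context window by keeping exact relative positions for nearby tokens and bucketing distant positions via floor division (grouped attention). The sketch below shows only that position-remapping idea; the window size, group size, and boundary offset are assumptions and may differ from the repository's implementation.

```python
# Sketch of Self-Extend-style position remapping for causal attention:
# nearby tokens keep exact relative positions, distant tokens are bucketed
# by floor division so positions beyond the pretrained window are reused.
# `window` and `group` are illustrative values, not the repo's defaults.
import numpy as np

def self_extend_rel_positions(seq_len: int, window: int = 4, group: int = 2) -> np.ndarray:
    """Return the remapped relative-position matrix for causal attention."""
    i = np.arange(seq_len)[:, None]    # query positions
    j = np.arange(seq_len)[None, :]    # key positions
    normal = i - j                     # exact relative distances
    grouped = i // group - j // group  # bucketed distances for far tokens
    # Shift grouped distances so they continue where the neighbor window ends.
    grouped = grouped + (window - window // group)
    rel = np.where(normal <= window, normal, grouped)
    return np.tril(rel)                # causal mask: zero out future keys

print(self_extend_rel_positions(8))
```

Because distant positions grow only as fast as `1/group`, a model pretrained on a fixed window never sees out-of-range position ids, which is how the technique extends context without any fine-tuning.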