teknium1 / stanford_alpaca-replit
Modified Stanford-Alpaca Trainer for Training Replit's Code Model
☆40 · Updated 2 years ago
Alternatives and similar repositories for stanford_alpaca-replit
Users interested in stanford_alpaca-replit are comparing it to the libraries listed below.
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Command-line script for inferencing from models such as MPT-7B-Chat ☆100 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆68 · Updated last year
- ☆48 · Updated last year
- Chat Markup Language conversation library ☆55 · Updated last year
- ☆72 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆118 · Updated 2 years ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆81 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated last year
- inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- 🔓 The open-source autonomous agent LLM initiative 🔓 ☆91 · Updated last year
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- A repository of prompts and Python scripts for intelligent transformation of raw text into diverse formats. ☆30 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆167 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated last year
- ☆19 · Updated last year
- ☆113 · Updated 5 months ago
- ☆22 · Updated last year
- ☆131 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆122 · Updated last year
- KMD is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricaci… ☆24 · Updated last year
- ☆57 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated 10 months ago
- ☆134 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆66 · Updated 8 months ago
- Demonstration that finetuning a RoPE model on larger sequences than the pre-trained model adapts the model's context limit ☆62 · Updated last year
- ☆19 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- ☆66 · Updated last year
- Command-line script for inferencing from models such as falcon-7b-instruct ☆74 · Updated last year