deep-diver / LLM-Pref-Mark-UI
☆37 · Updated 2 years ago
Alternatives and similar repositories for LLM-Pref-Mark-UI
Users interested in LLM-Pref-Mark-UI are comparing it to the libraries listed below.
- ☆32 · Updated 2 years ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated last year
- ☆31 · Updated last year
- A library for squeakily cleaning and filtering language datasets. ☆47 · Updated 2 years ago
- ☆27 · Updated last week
- Exploring finetuning public checkpoints on filtered 8K-token sequences from the Pile ☆116 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- Manage histories of LLM-based applications ☆91 · Updated last year
- ☆23 · Updated 2 years ago
- Minimal scripts for 24GB VRAM GPUs: training, inference, whatever ☆41 · Updated 3 weeks ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated last month
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆41 · Updated 8 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆43 · Updated last year
- ☆87 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated last year
- ☆64 · Updated 2 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 10 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 9 months ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- ☆35 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 5 months ago
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆64 · Updated last year
- Comparing the retrieval abilities of GPT-4 Turbo and a RAG system on a toy example across various context lengths ☆35 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆28 · Updated last year