geronimi73 / phi2-finetune
☆86 · Updated last year
Alternatives and similar repositories for phi2-finetune
Users interested in phi2-finetune are comparing it to the libraries listed below.
- Small and Efficient Mathematical Reasoning LLMs ☆72 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 9 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆109 · Updated 11 months ago
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated last year
- Score LLM pretraining data with classifiers ☆54 · Updated 2 years ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 9 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated 2 years ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆50 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- ☆55 · Updated last year
- ☆51 · Updated 9 months ago
- ☆45 · Updated 2 years ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 5 months ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆118 · Updated last year
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆106 · Updated last month
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated 3 weeks ago
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- ☆85 · Updated 2 years ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- ☆69 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆78 · Updated last year
- ☆94 · Updated 2 years ago
- Set of scripts to finetune LLMs ☆38 · Updated last year