geronimi73 / phi2-finetune
☆88 · Updated last year
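The repository's own notebooks are not reproduced on this listing page. As a rough orientation only, below is a minimal sketch of what a LoRA fine-tune of microsoft/phi-2 typically looks like with the Hugging Face transformers/peft stack; the dataset, hyperparameters, and `target_modules` are illustrative assumptions, not this repository's actual configuration.

```python
# Hypothetical minimal LoRA fine-tune of microsoft/phi-2 (illustrative sketch;
# not the contents of geronimi73/phi2-finetune).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # phi-2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach low-rank adapters to the attention projections; base weights stay frozen.
# These module names assume the current transformers Phi implementation.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Any small instruction dataset works; this one is only an example choice.
data = load_dataset("timdettmers/openassistant-guanaco", split="train[:1000]")
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="phi2-lora",
                           per_device_train_batch_size=2,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1, learning_rate=2e-4,
                           logging_steps=10, bf16=True),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```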
Alternatives and similar repositories for phi2-finetune
Users interested in phi2-finetune are comparing it to the repositories listed below.
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 6 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 · Updated 11 months ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆168 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- Just a bunch of benchmark logs for different LLMs ☆120 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 8 months ago
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago
- ☆46 · Updated last year
- ☆54 · Updated 9 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆71 · Updated 4 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆46 · Updated 3 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- Minimal scripts for 24GB VRAM GPUs: training, inference, whatever ☆41 · Updated 2 months ago
- ☆126 · Updated 10 months ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 3 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 7 months ago
- ☆40 · Updated last year
- Track the progress of LLM context utilisation ☆55 · Updated 4 months ago
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 6 months ago
- Simple retrieval from LLMs at various context lengths to measure accuracy ☆102 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 9 months ago
- ☆134 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- ☆48 · Updated 11 months ago