rawsh / mirrorllm
☆17 · Updated 7 months ago
Alternatives and similar repositories for mirrorllm
Users interested in mirrorllm are comparing it to the libraries listed below.
- Simple GRPO scripts and configurations. ☆59 · Updated 6 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- ☆20 · Updated last year
- An implementation of Self-Extend to expand the context window via grouped attention ☆119 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆88 · Updated 10 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- ☆74 · Updated last year
- ☆49 · Updated 6 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 8 months ago
- Comprehensive analysis of the differences in performance between QLoRA, LoRA, and full fine-tunes. ☆82 · Updated last year
- QLoRA for Masked Language Modeling ☆22 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- ☆47 · Updated last year
- ☆54 · Updated 9 months ago
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- ☆88 · Updated last year
- entropix-style sampling + GUI ☆27 · Updated 9 months ago
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Synthetic Data for LLM Fine-Tuning ☆120 · Updated last year
- One Line To Build Zero-Data Classifiers in Minutes ☆58 · Updated 10 months ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- Score LLM pretraining data with classifiers ☆55 · Updated last year
- Chat Markup Language conversation library ☆55 · Updated last year
- Small, simple agent task environments for training and evaluation ☆18 · Updated 9 months ago
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆65 · Updated last year
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆45 · Updated 3 months ago
- Train your own SOTA deductive reasoning model ☆104 · Updated 5 months ago
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 3 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆175 · Updated last year