emrgnt-cmplxty / SmolTrainer
☆21 · Updated 2 years ago
Alternatives and similar repositories for SmolTrainer
Users interested in SmolTrainer are comparing it to the libraries listed below.
- ☆74 · Updated 2 years ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆83 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆111 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized indexes processing ⚡ ☆68 · Updated last month
- Various experiments for scaling inference-time compute with small reasoning models ☆17 · Updated 11 months ago
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- ☆55 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Function Calling Benchmark & Testing ☆92 · Updated last year
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ ☆63 · Updated 2 years ago
- ☆117 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- entropix-style sampling + GUI ☆27 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆161 · Updated 2 years ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 11 months ago
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs ☆73 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated 3 months ago
- Tokun to can tokens ☆18 · Updated 6 months ago
- Multi-Domain Expert Learning ☆67 · Updated last year
- Simple and fast server for GPTQ-quantized LLaMA inference ☆24 · Updated 2 years ago
- ☆68 · Updated last year