usamec / lowmem_finetuning
Low memory full parameter finetuning of LLMs
☆53 · Updated 2 months ago
Alternatives and similar repositories for lowmem_finetuning
Users interested in lowmem_finetuning are comparing it to the libraries listed below.
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 5 months ago
- An introduction to LLM Sampling ☆79 · Updated 9 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆56 · Updated last week
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 · Updated 2 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 7 months ago
- ☆46 · Updated 6 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆83 · Updated last month
- code for training & evaluating Contextual Document Embedding models ☆197 · Updated 4 months ago
- LLM training in simple, raw C/CUDA ☆15 · Updated 10 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆146 · Updated this week
- smolLM with Entropix sampler on pytorch ☆150 · Updated 11 months ago
- Collection of autoregressive model implementations ☆86 · Updated 5 months ago
- look how they massacred my boy ☆63 · Updated 11 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 6 months ago
- ☆68 · Updated 4 months ago
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. ☆112 · Updated last year
- Exploring Applications of GRPO ☆250 · Updated last month
- Simple & Scalable Pretraining for Neural Architecture Research ☆296 · Updated last month
- ☆71 · Updated 3 months ago
- Simple repository for training small reasoning models ☆40 · Updated 8 months ago
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 4 months ago
- ☆48 · Updated last year
- One click away from a locally downloaded, fine-tuned model, hosted on Hugging Face, with inference built in. In two hours. ☆23 · Updated 6 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 8 months ago
- smolbox of recipes ☆28 · Updated 5 months ago
- ☆88 · Updated last year
- KernelBench v2: Can LLMs Write GPU Kernels? - Benchmark with Torch -> Triton (and more!) problems ☆21 · Updated 3 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆99 · Updated 2 months ago