Blaizzy / Coding-LLMs-from-scratch
☆34 · Updated last year
Alternatives and similar repositories for Coding-LLMs-from-scratch
Users interested in Coding-LLMs-from-scratch are comparing it to the repositories listed below.
- A simplified version of Google's Gemma model to be used for learning ☆26 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆231 · Updated 11 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆161 · Updated 2 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated 3 weeks ago
- ☆136 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆178 · Updated last year
- Port of Andrej Karpathy's nanoGPT to Apple MLX framework ☆112 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- ☆67 · Updated last year
- ☆115 · Updated 9 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆41 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon ☆83 · Updated last month
- Set of scripts to finetune LLMs ☆38 · Updated last year
- ☆87 · Updated last year
- Distributed inference for MLX LLMs ☆96 · Updated last year
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- Various installation guides for Large Language Models ☆73 · Updated 5 months ago
- Fast parallel LLM inference for MLX ☆220 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last year
- ☆120 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆201 · Updated last year
- One-click templates for Language Model inference ☆212 · Updated 2 months ago
- Training and fine-tuning an LLM in Python and PyTorch ☆42 · Updated 2 years ago
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace ☆32 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year