Blaizzy / Coding-LLMs-from-scratch
☆36 · Updated last year
Alternatives and similar repositories for Coding-LLMs-from-scratch
Users interested in Coding-LLMs-from-scratch are comparing it to the repositories listed below.
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- ☆137 · Updated last year
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework ☆118 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated last year
- LLM-Training-API: including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated 2 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆105 · Updated 2 years ago
- Full fine-tuning of large language models without large memory requirements ☆94 · Updated 4 months ago
- A simplified version of Google's Gemma model to be used for learning ☆26 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆169 · Updated 5 months ago
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- ☆119 · Updated last year
- ☆75 · Updated last year
- MLX Transformers is a library that provides model implementations in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆72 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆180 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon ☆85 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆41 · Updated last year
- Very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT ☆43 · Updated 7 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- ☆68 · Updated last year
- Various installation guides for Large Language Models ☆77 · Updated 9 months ago
- This is our own implementation of 'Layer-Selective Rank Reduction' ☆240 · Updated last year
- A repository of Python scripts to scrape the code contents of the public repositories of `huggingface` ☆53 · Updated last year
- ☆86 · Updated 2 years ago
- ☆127 · Updated 10 months ago
- A collection of optimizers for MLX ☆54 · Updated last month
- 🚀 End-to-end examples and analysis of serverless LLM deployment using Modal, Runpod, and Beam ☆28 · Updated last year
- A collection of notebooks for the Hugging Face blog series (https://huggingface.co/blog) ☆46 · Updated last year
- Fast parallel LLM inference for MLX ☆245 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆161 · Updated 2 years ago