hkproj / mistral-src-commented
Reference implementation of the Mistral AI 7B v0.1 model.
☆29 · Updated last year
Alternatives and similar repositories for mistral-src-commented
Users interested in mistral-src-commented are comparing it to the libraries listed below.
- ☆39 · Updated last month
- Notes on the Mistral AI model ☆19 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆109 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆111 · Updated 8 months ago
- ☆54 · Updated 4 months ago
- Fine-tune an LLM to perform batch inference and online serving ☆112 · Updated 3 weeks ago
- Complete implementation of Llama 2 with/without KV cache & inference 🚀 ☆47 · Updated last year
- Notes on quantization in neural networks ☆86 · Updated last year
- ML algorithm implementations that are good for learning the underlying principles ☆24 · Updated 6 months ago
- Various installation guides for Large Language Models ☆70 · Updated 2 months ago
- Making the official Triton tutorials actually comprehensible ☆41 · Updated 3 months ago
- An overview of GRPO & DeepSeek-R1 training, with open-source GRPO model fine-tuning ☆32 · Updated last month
- From-scratch implementation of a vision-language model in pure PyTorch ☆222 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆185 · Updated 3 weeks ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Set of scripts to fine-tune LLMs ☆37 · Updated last year
- Direct Preference Optimization implementation ☆16 · Updated last year
- ☆46 · Updated 2 months ago
- An extension of the nanoGPT repository for training small MoE models ☆152 · Updated 3 months ago
- ☆41 · Updated last month
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 7 months ago
- Testing and evaluating the capabilities of vision-language models (PaliGemma) in performing computer vision tasks such as object detectio… ☆81 · Updated last year
- Minimal example scripts for the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year
- Notes about the LLaMA 2 model ☆61 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon ☆80 · Updated last month
- Video + code lecture on building nanoGPT from scratch ☆68 · Updated last year
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 3 months ago
- ☆174 · Updated 5 months ago
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆29 · Updated 4 months ago
- Notes on Direct Preference Optimization ☆19 · Updated last year