hkproj / mistral-src-commented
Reference implementation of Mistral AI 7B v0.1 model.
☆28Updated last year
Alternatives and similar repositories for mistral-src-commented
Users that are interested in mistral-src-commented are comparing it to the libraries listed below
- One click templates for inferencing Language Models☆213Updated last month
- A set of scripts and notebooks on LLM finetuning and dataset creation☆110Updated 11 months ago
- Notes on the Mistral AI model☆20Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀☆48Updated last year
- Various installation guides for Large Language Models☆74Updated 4 months ago
- ☆44Updated 3 months ago
- GPU Kernels☆193Updated 4 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch☆116Updated 2 years ago
- ML algorithms implementations that are good for learning the underlying principles☆24Updated 9 months ago
- From scratch implementation of a vision language model in pure PyTorch☆239Updated last year
- Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA)☆308Updated 2 years ago
- An extension of the nanoGPT repository for training small MOE models.☆187Updated 6 months ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023.☆125Updated 2 years ago
- Fine tune Gemma 3 on an object detection task☆84Updated 2 months ago
- Notes on quantization in neural networks☆98Updated last year
- LLaMA 2 implemented from scratch in PyTorch☆350Updated last year
- Fine-tune an LLM to perform batch inference and online serving.☆112Updated 3 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free☆232Updated 10 months ago
- making the official triton tutorials actually comprehensible☆54Updated 3 weeks ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand☆193Updated 3 months ago
- Video+code lecture on building nanoGPT from scratch☆69Updated last year
- Set of scripts to finetune LLMs☆38Updated last year
- ☆44Updated 4 months ago
- ☆54Updated 3 weeks ago
- Distributed training (multi-node) of a Transformer model☆83Updated last year
- ☆75Updated 11 months ago
- LoRA and DoRA from Scratch Implementations☆212Updated last year
- customizable template GPT code designed for easy novel architecture experimentation☆26Updated 5 months ago
- LLaMA 3 is one of the most promising open-source models after Mistral; we recreate its architecture in a simpler manner.☆184Updated last year
- A compact LLM pretrained in 9 days by using high quality data☆323Updated 5 months ago