hkproj / mistral-src-commented
Reference implementation of Mistral AI 7B v0.1 model.
☆28Updated last year
Alternatives and similar repositories for mistral-src-commented
Users interested in mistral-src-commented are comparing it to the libraries listed below
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch☆112Updated 2 years ago
- One-click templates for running inference on language models☆201Updated 2 weeks ago
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner.☆175Updated 11 months ago
- Notes on quantization in neural networks☆95Updated last year
- LLaMA 2 implemented from scratch in PyTorch☆343Updated last year
- ☆43Updated 2 months ago
- Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA)☆296Updated 2 years ago
- Notes on the Mistral AI model☆20Updated last year
- Various installation guides for Large Language Models☆72Updated 3 months ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀☆48Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation☆110Updated 10 months ago
- ML algorithm implementations that are good for learning the underlying principles☆25Updated 7 months ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023.☆125Updated last year
- GPU Kernels☆191Updated 3 months ago
- Distributed training (multi-node) of a Transformer model☆76Updated last year
- Fine-tune an LLM to perform batch inference and online serving.☆112Updated 2 months ago
- From scratch implementation of a vision language model in pure PyTorch☆231Updated last year
- Making the official Triton tutorials actually comprehensible☆53Updated 2 weeks ago
- An extension of the nanoGPT repository for training small MoE models.☆164Updated 4 months ago
- ☆54Updated 5 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free☆232Updated 9 months ago
- ☆260Updated last month
- Toolkit for attaching, training, saving and loading of new heads for transformer models☆284Updated 5 months ago
- LoRA and DoRA from Scratch Implementations☆207Updated last year
- A compact LLM pretrained in 9 days by using high quality data☆320Updated 3 months ago
- Banishing LLM Hallucinations Requires Rethinking Generalization☆276Updated last year
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs☆313Updated 3 weeks ago
- Customizable template GPT code designed for easy novel architecture experimentation☆26Updated 4 months ago
- An LLM cookbook for building your own from scratch, all the way from gathering data to training a model☆148Updated last year
- ☆46Updated 4 months ago