hkproj / mistral-src-commented
Reference implementation of the Mistral AI 7B v0.1 model.
☆27 · Updated 10 months ago
Related projects
Alternatives and complementary repositories for mistral-src-commented
- Notes on the Mistral AI model ☆18 · Updated 10 months ago
- LLaMA 2 implemented from scratch in PyTorch ☆254 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch ☆82 · Updated last year
- End-to-End LLM Guide ☆97 · Updated 4 months ago
- Training and fine-tuning an LLM in Python and PyTorch ☆41 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆47 · Updated 5 months ago
- Video+code lecture on building nanoGPT from scratch ☆64 · Updated 5 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆93 · Updated last month
- Set of scripts to finetune LLMs ☆36 · Updated 7 months ago
- ☆87 · Updated 9 months ago
- Notes on quantization in neural networks ☆58 · Updated 11 months ago
- From-scratch implementation of a vision-language model in pure PyTorch ☆162 · Updated 6 months ago
- Distributed training (multi-node) of a Transformer model ☆43 · Updated 7 months ago
- A simplified version of Google's Gemma model for learning purposes ☆23 · Updated 8 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆221 · Updated 2 weeks ago
- ☆93 · Updated last month
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆173 · Updated 4 months ago
- One-click templates for language model inference ☆119 · Updated this week
- ☆118 · Updated 3 months ago
- Collection of autoregressive model implementations ☆67 · Updated this week
- LoRA and DoRA from-scratch implementations ☆188 · Updated 8 months ago
- An open-source implementation of LFMs (Liquid Foundation Models) from Liquid AI ☆72 · Updated last month
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆118 · Updated last year
- Experimenting with small language models ☆47 · Updated 10 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆97 · Updated last year
- My personal site ☆69 · Updated 3 months ago
- RAFT (Retrieval-Augmented Fine-Tuning), a method comprising a fine-tuning phase and a RAG-based retrieval phase. It is particularly sui… ☆75 · Updated 2 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆98 · Updated 11 months ago
- A compact LLM pretrained in 9 days using high-quality data ☆262 · Updated last month
- Customizable template GPT code designed for easy novel-architecture experimentation ☆25 · Updated 3 weeks ago
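Several repositories above implement LoRA (and variants such as DoRA, GPTQLoRA, and Q-GaLore). As a quick orientation before diving into those codebases, here is a minimal sketch of the core LoRA idea, using numpy instead of PyTorch for brevity; the class name `LoRALinear` and all parameter values are illustrative, not taken from any of the listed projects.

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer plus a trainable low-rank update (the LoRA idea).

    The pretrained weight W (d_out x d_in) stays frozen; only the small
    factors A (r x d_in) and B (d_out x r) would be trained. B starts at
    zero, so at initialization the layer matches the frozen one exactly.
    """
    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                   # frozen pretrained weight
        self.A = rng.normal(0.0, 0.01, size=(r, W.shape[1]))
        self.B = np.zeros((W.shape[0], r))           # zero init: no change at start
        self.scale = alpha / r                       # conventional LoRA scaling

    def forward(self, x):
        # x: (batch, d_in) -> (batch, d_out)
        # Equivalent to x @ (W + scale * B @ A).T without materializing B @ A.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

W = np.random.default_rng(1).normal(size=(16, 32))
layer = LoRALinear(W)
x = np.ones((2, 32))
# Because B is zero-initialized, the LoRA path contributes nothing yet:
assert np.allclose(layer.forward(x), x @ W.T)
```

The appeal, which the fine-tuning repos above exploit, is that only `A` and `B` (here 4×32 + 16×4 values instead of the full 16×32 matrix, and far more dramatically at 7B scale) need gradients and optimizer state.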