hkproj / mistral-src-commented
Reference implementation of the Mistral AI 7B v0.1 model.
☆28 · Updated last year
Alternatives and similar repositories for mistral-src-commented:
Users interested in mistral-src-commented are comparing it to the libraries listed below:
- Notes on the Mistral AI model ☆18 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch ☆94 · Updated last year
- Implementations of ML algorithms that are good for learning the underlying principles ☆19 · Updated 2 months ago
- Notes on quantization in neural networks ☆70 · Updated last year
- End-to-End LLM Guide ☆101 · Updated 7 months ago
- Customizable template GPT code designed for easy experimentation with novel architectures ☆26 · Updated 3 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆230 · Updated 3 months ago
- One-click templates for running inference on language models ☆155 · Updated 2 weeks ago
- LoRA and DoRA from Scratch Implementations ☆196 · Updated 11 months ago
- ☆128 · Updated last month
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆47 · Updated 8 months ago
- Video + code lecture on building nanoGPT from scratch ☆65 · Updated 8 months ago
- From-scratch implementation of a vision-language model in pure PyTorch ☆194 · Updated 9 months ago
- Set of scripts to finetune LLMs ☆36 · Updated 10 months ago
- LLaMA 2 implemented from scratch in PyTorch ☆294 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆103 · Updated 4 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆191 · Updated 7 months ago
- Notes about the LLaMA 2 model ☆53 · Updated last year
- ☆32 · Updated 8 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model, with or without FIM ☆51 · Updated 10 months ago
- A new benchmark for measuring LLMs' capability to detect bugs in large codebases ☆29 · Updated 8 months ago
- Various installation guides for Large Language Models ☆63 · Updated 3 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated 9 months ago
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆124 · Updated last year
- PyTorch implementation of models from the Zamba2 series ☆176 · Updated 3 weeks ago
- RAFT, or Retrieval-Augmented Fine-Tuning, is a method comprising a fine-tuning phase and a RAG-based retrieval phase. It is particularly sui… ☆85 · Updated 5 months ago
- ☆156 · Updated last week
- This repository contains the code for dataset curation and finetuning of the instruct variant of the Bilingual OpenHathi model. The resultin… ☆23 · Updated last year
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆63 · Updated 3 months ago
- ☆32 · Updated 2 weeks ago