antonio-f / mixture-of-experts-from-scratch
Mixture of Experts from scratch
☆12 · Updated last year
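For orientation on what a "mixture of experts from scratch" typically covers, below is a minimal sketch of a top-k gated MoE feed-forward layer in PyTorch. The class name `MoELayer`, the ReLU expert MLPs, and the routing details are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Minimal top-k gated mixture-of-experts feed-forward layer (generic sketch)."""
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # router producing per-expert logits
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens so routing is per token
        tokens = x.reshape(-1, x.size(-1))
        logits = self.gate(tokens)                           # (n_tokens, n_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)   # keep top-k experts per token
        weights = F.softmax(weights, dim=-1)                 # renormalise over the selected experts
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = (indices == e)                            # tokens routed to expert e
            token_idx, slot = mask.nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(tokens[token_idx])
        return out.reshape_as(x)

# quick smoke test
layer = MoELayer(d_model=32, d_hidden=64)
print(layer(torch.randn(2, 5, 32)).shape)  # torch.Size([2, 5, 32])
```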
Alternatives and similar repositories for mixture-of-experts-from-scratch
Users interested in mixture-of-experts-from-scratch are comparing it to the repositories listed below
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆118 · Updated 2 years ago
- An extension of the nanoGPT repository for training small MoE models. ☆219 · Updated 9 months ago
- ☆228 · Updated 11 months ago
- Tutorial on how to build BERT from scratch ☆100 · Updated last year
- First-principle implementations of groundbreaking AI algorithms using a wide range of deep learning frameworks, accompanied by supporting… ☆180 · Updated 5 months ago
- Distributed training (multi-node) of a Transformer model ☆90 · Updated last year
- Notes and commented code for RLHF (PPO) ☆120 · Updated last year
- This repository provides exhaustive, hands-on coverage of PyTorch, alongside powerful tools to accelerate model tuning an… ☆207 · Updated last week
- LLaMA 2 implemented from scratch in PyTorch ☆363 · Updated 2 years ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆161 · Updated 3 weeks ago
- making the official triton tutorials actually comprehensible ☆80 · Updated 4 months ago
- ☆45 · Updated 7 months ago
- LoRA and DoRA from Scratch Implementations ☆214 · Updated last year
- Building a 2.3M-parameter LLM from scratch with LLaMA 1 architecture. ☆195 · Updated last year
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts. ☆139 · Updated last year
- ☆45 · Updated 7 months ago
- Implementation of BERT-based Language Models ☆24 · Updated last year
- ☆80 · Updated last year
- Notes on the Mamba and the S4 model (Mamba: Linear-Time Sequence Modeling with Selective State Spaces) ☆175 · Updated last year
- 🧠 A study guide to learn about Transformers ☆12 · Updated last year
- GPU Kernels ☆212 · Updated 7 months ago
- ☆89 · Updated 8 months ago
- ☆225 · Updated last month
- Notes on Direct Preference Optimization ☆23 · Updated last year
- Notes on quantization in neural networks ☆114 · Updated 2 years ago
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner. ☆194 · Updated last year
- ☆99 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆49 · Updated last year
- Advanced NLP, Spring 2025 https://cmu-l3.github.io/anlp-spring2025/ ☆69 · Updated 8 months ago
- Implementations of a Mixture-of-Experts (MoE) architecture designed for research on large language models (LLMs) and scalable neural netw… ☆36 · Updated 8 months ago