kyegomez / GPT3
An implementation of the base GPT-3 model architecture from the OpenAI paper "Language Models are Few-Shot Learners"
☆19 · Updated last year
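For orientation, the paper above describes a decoder-only transformer. The following is a minimal sketch of one GPT-style decoder block in PyTorch (pre-norm, causal self-attention, GELU MLP); the class name, layer sizes, and defaults are illustrative assumptions and are not taken from the kyegomez/GPT3 repository.

```python
# Minimal sketch of a GPT-style decoder block (pre-norm, causal attention).
# Illustrative only; not code from kyegomez/GPT3.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model=768, n_heads=12, mlp_ratio=4, dropout=0.1):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, mlp_ratio * d_model),
            nn.GELU(),
            nn.Linear(mlp_ratio * d_model, d_model),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        # Causal mask: True entries are positions a token may NOT attend to.
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + a                      # residual around attention
        x = x + self.mlp(self.ln2(x))  # residual around MLP
        return x

# Usage: a batch of 2 sequences of length 16 with embedding dim 768.
x = torch.randn(2, 16, 768)
print(DecoderBlock()(x).shape)  # torch.Size([2, 16, 768])
```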
Alternatives and similar repositories for GPT3
Users interested in GPT3 are comparing it to the libraries listed below.
- Implementation of Infini-Transformer in PyTorch ☆112 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆170 · Updated 11 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆121 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Collection of autoregressive model implementations ☆85 · Updated this week
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Unofficial implementation of Evolutionary Model Merging ☆41 · Updated last year
- We study toy models of skill learning. ☆31 · Updated 11 months ago
- Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google DeepMind ☆72 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated last year
- Implementation of Agent Attention in PyTorch ☆93 · Updated last year
- ☆71 · Updated last year
- MatFormer repo ☆67 · Updated last year
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated 3 weeks ago
- Model Stock: All we need is just a few fine-tuned models ☆128 · Updated 5 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆81 · Updated 2 years ago
- PyTorch implementation of MoE (mixture of experts) ☆51 · Updated 4 years ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 8 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 10 months ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆55 · Updated 9 months ago
- Implementation of "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" in PyTorch ☆52 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆74 · Updated 6 months ago
- ☆82 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Implementation of a modular, high-performance, and simple Mamba for high-speed applications ☆40 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind ☆57 · Updated 7 months ago