lucidrains / llama-qrlhf
Implementation of the Llama architecture with RLHF + Q-learning
☆157 · Updated 11 months ago
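The repository's exact training objective isn't shown on this page. As orientation only, here is a minimal sketch of the general idea the title names: Q-learning over an autoregressive decoder, where each next-token choice is treated as an action and a scalar RLHF-style reward arrives at the end of the sequence. Everything below (`TinyDecoder`, `gamma`, the random toy data) is a hypothetical placeholder under those assumptions, not code from this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDecoder(nn.Module):
    """Stand-in for a Llama-style causal decoder that emits per-token Q-values."""
    def __init__(self, vocab_size=256, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.to_q = nn.Linear(dim, vocab_size)  # Q(s_t, a) for every candidate next token a

    def forward(self, tokens):
        x = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.block(x, src_mask=mask)  # causal mask: position t sees tokens[0..t]
        return self.to_q(x)               # (batch, seq, vocab)

model, target = TinyDecoder(), TinyDecoder()
target.load_state_dict(model.state_dict())  # frozen copy for stable TD targets

tokens = torch.randint(0, 256, (2, 16))  # toy sampled sequences
reward = torch.randn(2)                  # hypothetical terminal reward, e.g. a reward-model score
gamma = 0.99

q_all = model(tokens)  # Q-values at every prefix position
# Q(s_t, a_t) of the actions (next tokens) actually taken
q_taken = q_all[:, :-1].gather(-1, tokens[:, 1:, None]).squeeze(-1)

with torch.no_grad():
    q_next = target(tokens)[:, 1:].max(dim=-1).values  # max_a Q(s_{t+1}, a)
    td_target = gamma * q_next   # intermediate steps: bootstrap only (reward is sparse)
    td_target[:, -1] = reward    # final step: the sequence-level reward

loss = F.smooth_l1_loss(q_taken, td_target)
loss.backward()  # an optimizer step and periodic target-network sync would follow
```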
Related projects
Alternatives and complementary repositories for llama-qrlhf
- Implementation of Infini-Transformer in Pytorch ☆104 · Updated last month
- Understand and test language model architectures on synthetic tasks. ☆163 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆84 · Updated this week
- Minimal (400 LOC) implementation, Maximum (multi-node, FSDP) GPT training ☆113 · Updated 7 months ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆205 · Updated 3 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆214 · Updated this week
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆119 · Updated 3 months ago
- Normalized Transformer (nGPT) ☆87 · Updated this week
- ☆175 · Updated this week
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆109 · Updated last month
- Collection of autoregressive model implementations ☆67 · Updated this week
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆112 · Updated 3 months ago
- σ-GPT: A New Approach to Autoregressive Models ☆59 · Updated 3 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 6 months ago
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" ☆85 · Updated 2 months ago
- ☆171 · Updated this week
- ☆73 · Updated 4 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆104 · Updated 2 months ago
- Some preliminary explorations of Mamba's context scaling. ☆191 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆90 · Updated 3 months ago
- ☆77 · Updated 7 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind ☆169 · Updated 2 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆180 · Updated 5 months ago
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch ☆88 · Updated 11 months ago
- Implementation of GateLoop Transformer in Pytorch and Jax ☆86 · Updated 5 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆226 · Updated 2 months ago
- Token Omission Via Attention ☆121 · Updated last month
- WIP ☆89 · Updated 3 months ago