lucidrains / llama-qrlhf
Implementation of the Llama architecture with RLHF + Q-learning
☆ 157 · Updated last year
Alternatives and similar repositories for llama-qrlhf:
Users interested in llama-qrlhf are comparing it to the libraries listed below.
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆ 121 · Updated 9 months ago
- Implementation of Infini-Transformer in Pytorch ☆ 109 · Updated 3 weeks ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆ 113 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆ 91 · Updated 2 months ago
- ☆ 180 · Updated this week
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆ 119 · Updated 6 months ago
- ☆ 78 · Updated 9 months ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆ 205 · Updated 5 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆ 92 · Updated 5 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆ 150 · Updated 3 weeks ago
- Understand and test language model architectures on synthetic tasks. ☆ 177 · Updated 2 weeks ago
- Collection of autoregressive model implementations ☆ 77 · Updated 3 weeks ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆ 102 · Updated last month
- ☆ 164 · Updated last year
- ☆ 75 · Updated 6 months ago
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆ 115 · Updated 5 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆ 95 · Updated last month
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆ 225 · Updated 4 months ago
- Some common Huggingface transformers in maximal update parametrization (µP) ☆ 79 · Updated 2 years ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆ 85 · Updated 9 months ago
- Normalized Transformer (nGPT) ☆ 146 · Updated 2 months ago
- ☆ 66 · Updated 6 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆ 183 · Updated 8 months ago
- ☆ 60 · Updated last year
- Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead ☆ 220 · Updated 3 weeks ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆ 219 · Updated last month
- ☆ 53 · Updated last year
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google DeepMind, in Pytorch ☆ 88 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆ 96 · Updated 9 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆ 50 · Updated 9 months ago