winstonsmith1897 / GTPO
☆34 · Updated last month
Alternatives and similar repositories for GTPO
Users interested in GTPO are comparing it to the libraries listed below.
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 5 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆92 · Updated 5 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 7 months ago
- ☆136 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆109 · Updated 5 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Modeling code for a BitNet b1.58 Llama-style model. ☆25 · Updated last year
- ☆55 · Updated 11 months ago
- Official repository for Task-Circuit Quantization ☆24 · Updated 4 months ago
- ☆80 · Updated 11 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- RWKV-7: Surpassing GPT ☆98 · Updated 11 months ago
- PyTorch implementation of models from the Zamba2 series. ☆185 · Updated 8 months ago
- entropix-style sampling + GUI ☆27 · Updated 11 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo search with PPO and/or DPO ☆29 · Updated this week
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated last year
- ☆62 · Updated 3 months ago
- [ACL 2025] How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training ☆45 · Updated 3 months ago
- An unofficial PyTorch implementation of 'Efficient Infinite Context Transformers with Infini-attention' ☆53 · Updated last year
- Library to facilitate pruning of LLMs based on context ☆32 · Updated last year
- ☆38 · Updated 5 months ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆45 · Updated this week
- ☆14 · Updated 3 months ago
- GPT-4 Level Conversational QA Trained In a Few Hours ☆65 · Updated last year
- ☆63 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- ☆136 · Updated last month
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year