GoldFinch and other hybrid transformer components
☆45, updated Jul 20, 2024
Alternatives and similar repositories for GoldFinch-paper
Users interested in GoldFinch-paper are comparing it to the libraries listed below.
- GoldFinch and other hybrid transformer components (☆12, updated Dec 9, 2025)
- Mini Model Daemon (☆12, updated Nov 9, 2024)
- ☆17, updated Jan 1, 2025
- ☆11, updated Oct 11, 2023
- Fast modular code to create and train cutting-edge LLMs (☆68, updated May 16, 2024)
- ☆41, updated Apr 30, 2025
- ☆27, updated Feb 26, 2026
- RADLADS training code (☆37, updated May 7, 2025)
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite context training, aligning. Exploring the… (☆63, updated Sep 19, 2025)
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated (☆34, updated Aug 14, 2024)
- RWKV-7 mini (☆12, updated Mar 29, 2025)
- A repository for research on medium-sized language models. (☆78, updated May 23, 2024)
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… (☆48, updated Oct 21, 2025)
- Some preliminary explorations of Mamba's context scaling. (☆13, updated Dec 18, 2024)
- ☆12, updated Dec 14, 2024
- RWKV6 in native PyTorch and Triton :) (☆11, updated Aug 4, 2024)
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! (☆148, updated Aug 13, 2024)
- ☆54, updated May 20, 2024
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best … (☆10, updated Nov 3, 2023)
- Continuous batching and parallel acceleration for RWKV6 (☆22, updated Jun 28, 2024)
- State tuning tunes the state (☆35, updated Feb 12, 2025)
- The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models". (☆18, updated Apr 25, 2025)
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework (☆57, updated Dec 24, 2025)
- ☆20, updated Aug 1, 2024
- RWKV-7: Surpassing GPT (☆104, updated Nov 17, 2024)
- ☆81, updated May 15, 2024
- ☆63, updated Oct 3, 2024
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6. (☆11, updated Mar 1, 2024)
- Awesome RWKV Prompts: user-friendly, ready-to-use prompt examples for general users. (☆35, updated Jan 24, 2025)
- Fluid Language Model Benchmarking (☆27, updated Sep 16, 2025)
- ☆34, updated Jul 21, 2024
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) (☆32, updated Apr 9, 2025)
- ☆126, updated Feb 4, 2026
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton (☆48, updated Aug 22, 2025)
- MetaLadder: Ascending Mathematical Solution Quality via Analogical-Problem Reasoning Transfer (EMNLP 2025) (☆12, updated Apr 18, 2025)
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. (☆244, updated Jan 13, 2026)
- Here we will test various linear attention designs. (☆62, updated Apr 25, 2024)
- Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity (ACL 2025, oral) (☆32, updated Jun 14, 2025)
- ☆24, updated Dec 11, 2024