Jellyfish042 / Sudoku-RWKV
☆142 · Updated 7 months ago
Alternatives and similar repositories for Sudoku-RWKV
Users interested in Sudoku-RWKV are comparing it to the repositories listed below.
- RWKV-7: Surpassing GPT ☆92 · Updated 7 months ago
- ☆59 · Updated 3 months ago
- ☆88 · Updated last month
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, Unsloth ☆122 · Updated this week
- A collection of tricks and tools to speed up transformer models ☆170 · Updated last month
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆41 · Updated 5 months ago
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from DeepMind ☆55 · Updated last month
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 2 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 4 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆81 · Updated last month
- EvaByte: Efficient Byte-level Language Models at Scale ☆103 · Updated 2 months ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆108 · Updated 2 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆128 · Updated 7 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 6 months ago
- Normalized Transformer (nGPT) ☆184 · Updated 7 months ago
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆35 · Updated last week
- Fast, modular code to create and train cutting-edge LLMs ☆67 · Updated last year
- RWKV in nanoGPT style ☆191 · Updated last year
- Evaluating LLMs with Dynamic Data ☆93 · Updated last month
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆87 · Updated 2 weeks ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆216 · Updated 3 weeks ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" ☆91 · Updated last month
- ☆104 · Updated 2 months ago
- ☆280 · Updated last month
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆153 · Updated last year
- Inference of Mamba models in pure C ☆188 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆341 · Updated 7 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 2 weeks ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆140 · Updated 4 months ago
- GRadient-INformed MoE ☆263 · Updated 9 months ago