step-law / steplaw
☆139 · Updated 2 weeks ago
Alternatives and similar repositories for steplaw:
Users interested in steplaw are comparing it to the libraries listed below.
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆158 · Updated last week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆212 · Updated this week
- ☆166 · Updated last month
- ☆182 · Updated 5 months ago
- qwen-nsa ☆42 · Updated last week
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆145 · Updated last week
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆242 · Updated 2 months ago
- Related works and background techniques for OpenAI o1 ☆217 · Updated 2 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆86 · Updated last week
- A Survey on Efficient Reasoning for LLMs ☆116 · Updated this week
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆87 · Updated 2 weeks ago
- ☆186 · Updated this week
- ☆71 · Updated last week
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆168 · Updated last month
- DeepSeek Native Sparse Attention PyTorch implementation ☆46 · Updated 3 weeks ago
- SOTA RL fine-tuning solution for advanced math reasoning of LLMs ☆91 · Updated this week
- ☆60 · Updated 4 months ago
- Paper list for Efficient Reasoning. ☆311 · Updated this week
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆148 · Updated last week
- ☆124 · Updated 3 weeks ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆131 · Updated last month
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆98 · Updated 2 weeks ago
- ☆70 · Updated 2 weeks ago
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆177 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆130 · Updated 9 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆145 · Updated this week
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆166 · Updated 3 weeks ago
- 🔥 A minimal training framework for scaling FLA models ☆82 · Updated this week
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆96 · Updated last month
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆124 · Updated this week