Sequence-level 1F1B schedule for LLMs.
☆19 · Updated Jun 4, 2024
Alternatives and similar repositories for Seq1F1B
Users interested in Seq1F1B are comparing it to the libraries listed below.
- Sequence-level 1F1B schedule for LLMs. ☆37 · Updated Aug 26, 2025
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling ☆53 · Updated Jul 15, 2025
- ☆17 · Updated May 10, 2024
- Vocabulary Parallelism ☆26 · Updated Mar 10, 2025
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated Jun 4, 2024
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) ☆19 · Updated May 28, 2024
- Schedule-free optimiser implemented in JAX using Optimistix ☆15 · Updated May 29, 2024
- ☆84 · Updated Feb 11, 2026
- ☆11 · Updated Feb 28, 2023
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Updated Dec 13, 2023
- Official implementation of Acc-SpMM: Accelerating General-purpose Sparse Matrix-Matrix Multiplication with GPU Tensor Cores ☆14 · Updated Nov 13, 2025
- The code for our paper "Neural Architecture Search as Program Transformation Exploration" ☆16 · Updated Apr 28, 2021
- Estimate MFU for DeepSeekV3 ☆26 · Updated Jan 5, 2025
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · Updated May 12, 2025
- Zero Bubble Pipeline Parallelism ☆452 · Updated May 7, 2025
- Large language models to diffusion finetuning code ☆26 · Updated Jun 2, 2025
- ☆36 · Updated Jun 10, 2024
- Scaling Sparse Fine-Tuning to Large Language Models ☆19 · Updated Jan 31, 2024
- Scripts used to create the data for the ATC 2020 paper "Reconstructing proprietary video streaming algorithms" ☆14 · Updated Mar 24, 2021
- Original code base for "On Pretraining Data Diversity for Self-Supervised Learning" ☆14 · Updated Dec 30, 2024
- Code for reproducing experiments performed for Accordion ☆13 · Updated Jun 11, 2021
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆36 · Updated Jan 9, 2023
- Fast and memory-efficient exact attention ☆22 · Updated Apr 10, 2026
- Official code repository of Shuffle-R1 ☆25 · Updated Feb 23, 2026
- ☆24 · Updated Nov 27, 2025
- FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model] ☆49 · Updated Apr 29, 2026
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆35 · Updated Aug 14, 2024
- Neurocomputing "Deep Multi-Center Learning for Face Alignment" ☆12 · Updated Mar 28, 2020
- ☆101 · Updated Feb 11, 2026
- ☆13 · Updated Dec 18, 2020
- ☆11 · Updated Feb 23, 2024
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated Jun 17, 2024
- SGLang Kernel Wheel Index ☆22 · Updated this week
- MUA-RL: Multi-Turn User-Interacting Agent Reinforcement Learning for Agentic Tool Use ☆58 · Updated Nov 5, 2025
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- Open-source version of the DOCA GPUNetIO and DOCA Verbs libraries (limited features) to enable GDAKI technology on RDMA (IB and RoCE) ☆46 · Updated May 1, 2026
- Intelligent Resource Requirement Estimation and Scheduling for Deep Learning Jobs on Distributed GPU Clusters ☆15 · Updated Nov 18, 2021
- A Generic Resource-Aware Hyperparameter Tuning Execution Engine ☆15 · Updated Jan 8, 2022
- Compiler for Dynamic Neural Networks ☆45 · Updated Nov 13, 2023