Sequence-level 1F1B schedule for LLMs.
☆38 · Aug 26, 2025 · Updated 7 months ago
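1F1B ("one forward, one backward") is the pipeline-parallel schedule that Seq1F1B extends to sequence-level granularity: each stage runs a few warmup forward passes, then strictly alternates one forward with one backward, then drains the remaining backwards. As a minimal illustrative sketch (not code from this repository; the function name and microbatch-level granularity are assumptions), the per-stage op order can be generated like this:

```python
def one_f_one_b(num_stages, num_microbatches, stage):
    """Return the ('F'/'B', microbatch) op sequence for one pipeline stage
    under a classic microbatch-level 1F1B schedule (0-indexed stages)."""
    # Earlier stages need more warmup forwards so later stages can start.
    warmup = min(num_stages - stage - 1, num_microbatches)
    ops = [("F", i) for i in range(warmup)]
    f, b = warmup, 0
    # Steady state: alternate one forward with one backward.
    while f < num_microbatches:
        ops.append(("F", f)); f += 1
        ops.append(("B", b)); b += 1
    # Cooldown: drain the remaining backwards.
    while b < num_microbatches:
        ops.append(("B", b)); b += 1
    return ops

# The last stage has no warmup, so it alternates F/B from the start:
print(one_f_one_b(4, 8, 3)[:4])  # [('F', 0), ('B', 0), ('F', 1), ('B', 1)]
```

Seq1F1B applies this same pattern at the granularity of sequence chunks rather than whole microbatches, which shrinks the activation memory held during the warmup phase.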
Alternatives and similar repositories for Seq1F1B
Users interested in Seq1F1B are comparing it to the libraries listed below.
- Sequence-level 1F1B schedule for LLMs. ☆19 · Jun 4, 2024 · Updated last year
- Distributed IO-aware Attention algorithm ☆24 · Sep 24, 2025 · Updated 6 months ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆19 · Dec 8, 2023 · Updated 2 years ago
- Vocabulary Parallelism ☆25 · Mar 10, 2025 · Updated last year
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling ☆54 · Jul 15, 2025 · Updated 9 months ago
- Zero Bubble Pipeline Parallelism ☆452 · May 7, 2025 · Updated 11 months ago
- Allow torch tensor memory to be released and resumed later ☆233 · Mar 10, 2026 · Updated last month
- LLM training technologies developed by Kwai ☆71 · Jan 21, 2026 · Updated 2 months ago
- ☆47 · Sep 8, 2025 · Updated 7 months ago
- A benchmark suited especially for deep learning operators ☆42 · Feb 13, 2023 · Updated 3 years ago
- ☆26 · Dec 5, 2022 · Updated 3 years ago
- Canvas: End-to-End Kernel Architecture Search in Neural Networks ☆27 · Nov 18, 2024 · Updated last year
- CUDA SGEMM optimization note ☆15 · Oct 31, 2023 · Updated 2 years ago
- Ring attention implementation with flash attention ☆1,006 · Sep 10, 2025 · Updated 7 months ago
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆97 · Apr 7, 2026 · Updated last week
- ☆24 · Aug 15, 2023 · Updated 2 years ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆116 · Mar 20, 2025 · Updated last year
- A lightweight design for computation-communication overlap. ☆226 · Jan 20, 2026 · Updated 2 months ago
- Utility scripts for PyTorch (e.g. make Perfetto show some disappearing kernels, a memory profiler that understands more low-level allocatio… ☆102 · Sep 11, 2025 · Updated 7 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆95 · Apr 6, 2026 · Updated last week
- ☆635 · Jan 14, 2026 · Updated 3 months ago
- ☆14 · Oct 3, 2024 · Updated last year
- ☆13 · Feb 22, 2023 · Updated 3 years ago
- ☆63 · Jul 21, 2024 · Updated last year
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆56 · Updated this week
- SOTA Learning-augmented Systems ☆37 · May 21, 2022 · Updated 3 years ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆506 · Apr 9, 2026 · Updated last week
- Tsinghua University dormitory washing-machine availability reminder mini-program ☆14 · Feb 4, 2021 · Updated 5 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Jul 4, 2025 · Updated 9 months ago
- An NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆105 · Dec 17, 2025 · Updated 3 months ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆182 · Mar 17, 2026 · Updated 3 weeks ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆236 · Jan 14, 2026 · Updated 3 months ago
- ☆78 · May 4, 2021 · Updated 4 years ago
- PyTorch-centric eager-mode debugger ☆48 · Dec 16, 2024 · Updated last year
- [ICML 2024] Sparse Model Inversion: Efficient Inversion of Vision Transformers with Less Hallucination ☆14 · Apr 29, 2025 · Updated 11 months ago
- ☆38 · Jan 15, 2021 · Updated 5 years ago
- Surrogate-based Hyperparameter Tuning System ☆30 · Jun 29, 2023 · Updated 2 years ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,286 · Aug 28, 2025 · Updated 7 months ago
- Read audio with FFmpeg into NumPy/PyTorch via ctypes (standard library module) ☆11 · Aug 12, 2020 · Updated 5 years ago