☆79 · Dec 15, 2023 · Updated 2 years ago
Alternatives and similar repositories for AscendSpeed
Users that are interested in AscendSpeed are comparing it to the libraries listed below.
- ☆53 · Mar 18, 2026 · Updated last week
- Tensorflow implementation of DeepMind's Tacotron-2 (without wavenet) ☆11 · Jul 12, 2019 · Updated 6 years ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitcode.com/Ascend/pytorch ☆494 · Updated this week
- ☆47 · Dec 13, 2024 · Updated last year
- ☆18 · Mar 4, 2025 · Updated last year
- ☆18 · Jun 8, 2021 · Updated 4 years ago
- ☆11 · Dec 9, 2025 · Updated 3 months ago
- ☆20 · Sep 28, 2024 · Updated last year
- 🌟Official code of our AAAI26 paper 🔍WebFilter ☆38 · Nov 9, 2025 · Updated 4 months ago
- Manages vllm-nccl dependency ☆17 · Jun 3, 2024 · Updated last year
- ☆23 · Jan 7, 2022 · Updated 4 years ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆40 · Mar 17, 2024 · Updated 2 years ago
- ☆220 · Aug 17, 2023 · Updated 2 years ago
- ☆50 · Jul 2, 2023 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,438 · Mar 20, 2024 · Updated 2 years ago
- play gemm with tvm ☆92 · Jul 22, 2023 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Jun 12, 2024 · Updated last year
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · May 8, 2024 · Updated last year
- ☆115 · Aug 26, 2024 · Updated last year
- Implementation of Global Style Token Tacotron in TensorFlow2 ☆26 · Sep 28, 2020 · Updated 5 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,236 · Aug 14, 2025 · Updated 7 months ago
- ☆185 · Jan 28, 2026 · Updated last month
- ☆30 · Sep 4, 2023 · Updated 2 years ago
- LLM training technologies developed by kwai ☆71 · Jan 21, 2026 · Updated 2 months ago
- Community-maintained hardware plugin for vLLM on Ascend ☆1,805 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆164 · Jan 12, 2026 · Updated 2 months ago
- XRM (Xilinx FPGA Resource Manager) Document: ☆25 · Nov 13, 2023 · Updated 2 years ago
- ☆19 · Dec 6, 2023 · Updated 2 years ago
- ☆68 · Jul 8, 2025 · Updated 8 months ago
- List of papers about TTS ☆10 · Dec 16, 2017 · Updated 8 years ago
- ☆167 · Jul 5, 2023 · Updated 2 years ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Feb 27, 2025 · Updated last year
- ☆97 · Mar 26, 2025 · Updated last year
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ Tensor class. ☆10 · Feb 10, 2022 · Updated 4 years ago
- Depict GPU memory footprint during DNN training of PyTorch ☆11 · Nov 17, 2022 · Updated 3 years ago
- ☆11 · Jan 21, 2021 · Updated 5 years ago
- Ring attention implementation with flash attention ☆998 · Sep 10, 2025 · Updated 6 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆124 · Dec 18, 2023 · Updated 2 years ago
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,302 · May 16, 2023 · Updated 2 years ago