☆79 · Dec 15, 2023 · Updated 2 years ago
Alternatives and similar repositories for AscendSpeed
Users interested in AscendSpeed are comparing it to the libraries listed below.
- TensorFlow implementation of DeepMind's Tacotron-2 (without WaveNet) · ☆11 · Jul 12, 2019 · Updated 6 years ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitcode.com/Ascend/pytorch · ☆513 · Updated this week
- ☆47 · Dec 13, 2024 · Updated last year
- ☆19 · Mar 4, 2025 · Updated last year
- ☆17 · Jun 8, 2021 · Updated 4 years ago
- ☆20 · Sep 28, 2024 · Updated last year
- 🌟Official code of our AAAI26 paper 🔍WebFilter · ☆38 · Nov 9, 2025 · Updated 5 months ago
- ☆23 · Jan 7, 2022 · Updated 4 years ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… · ☆40 · Mar 17, 2024 · Updated 2 years ago
- ☆220 · Aug 17, 2023 · Updated 2 years ago
- ☆50 · Jul 2, 2023 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆1,439 · Mar 20, 2024 · Updated 2 years ago
- play gemm with tvm · ☆91 · Jul 22, 2023 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models · ☆139 · Jun 12, 2024 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks · ☆146 · Jun 25, 2022 · Updated 3 years ago
- ☆334 · Jun 24, 2024 · Updated last year
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. · ☆39 · Apr 24, 2026 · Updated 2 weeks ago
- ☆114 · Aug 26, 2024 · Updated last year
- Implementation of Global Style Token Tacotron in TensorFlow2 · ☆26 · Sep 28, 2020 · Updated 5 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆2,246 · Aug 14, 2025 · Updated 8 months ago
- ☆187 · Jan 28, 2026 · Updated 3 months ago
- ☆30 · Sep 4, 2023 · Updated 2 years ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" · ☆40 · Nov 11, 2024 · Updated last year
- LLM training technologies developed by Kwai · ☆71 · Jan 21, 2026 · Updated 3 months ago
- Community-maintained hardware plugin for vLLM on Ascend · ☆2,035 · Updated this week
- XRM (Xilinx FPGA Resource Manager) documentation · ☆25 · Nov 13, 2023 · Updated 2 years ago
- ☆19 · Dec 6, 2023 · Updated 2 years ago
- ☆70 · Jul 8, 2025 · Updated 10 months ago
- A zoneout implementation based on PyTorch · ☆10 · Jan 22, 2019 · Updated 7 years ago
- List of papers about TTS · ☆10 · Dec 16, 2017 · Updated 8 years ago
- ☆167 · Jul 5, 2023 · Updated 2 years ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios · ☆46 · Feb 27, 2025 · Updated last year
- ☆98 · Mar 26, 2025 · Updated last year
- Artifact for PPoPP'20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" · ☆41 · Nov 16, 2021 · Updated 4 years ago
- Depict GPU memory footprint during DNN training in PyTorch · ☆11 · Nov 17, 2022 · Updated 3 years ago
- ☆11 · Jan 21, 2021 · Updated 5 years ago
- Ring attention implementation with flash attention · ☆1,015 · Sep 10, 2025 · Updated 7 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … · ☆124 · Dec 18, 2023 · Updated 2 years ago
- LightSeq: A High Performance Library for Sequence Processing and Generation · ☆3,300 · May 16, 2023 · Updated 2 years ago