GeeeekExplorer / cupytorch
A small framework that mimics PyTorch, using CuPy or NumPy
☆52 · Updated 3 years ago
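The core idea behind a NumPy/CuPy-backed PyTorch mimic is a tensor that records how it was computed, so gradients can flow backward through the graph. The sketch below illustrates that idea only; it is not cupytorch's actual API, and the `Tensor` class, its methods, and the two example ops are illustrative assumptions.

```python
import numpy as np

class Tensor:
    """A NumPy-backed tensor that records how to propagate gradients."""
    def __init__(self, data, parents=(), backward_fn=None):
        self.data = np.asarray(data, dtype=np.float64)
        self.grad = np.zeros_like(self.data)
        self._parents = parents          # tensors this one was computed from
        self._backward_fn = backward_fn  # accumulates grads into parents

    def __mul__(self, other):
        out = Tensor(self.data * other.data, parents=(self, other))
        def backward_fn():
            self.grad += other.data * out.grad   # d(xy)/dx = y
            other.grad += self.data * out.grad   # d(xy)/dy = x
        out._backward_fn = backward_fn
        return out

    def sum(self):
        out = Tensor(self.data.sum(), parents=(self,))
        def backward_fn():
            self.grad += np.ones_like(self.data) * out.grad
        out._backward_fn = backward_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(t):
            if id(t) not in seen:
                seen.add(id(t))
                for p in t._parents:
                    visit(p)
                order.append(t)
        visit(self)
        self.grad = np.ones_like(self.data)
        for t in reversed(order):
            if t._backward_fn is not None:
                t._backward_fn()

x = Tensor([1.0, 2.0, 3.0])
y = Tensor([4.0, 5.0, 6.0])
(x * y).sum().backward()
print(x.grad)  # [4. 5. 6.] -- matches torch.autograd on the same graph
```

Swapping `import numpy as np` for `import cupy as np` would move the same code to the GPU, which appears to be the trick the repo's name alludes to.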
Alternatives and similar repositories for cupytorch
Users interested in cupytorch are comparing it to the libraries listed below
- Contextual Position Encoding with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated last year
- A comparison of the Lion and Adam optimizers (a minimal Lion sketch appears after this list) ☆64 · Updated 2 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆102 · Updated 2 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Updated 6 months ago
- 📑 Dive into Big Model Training ☆116 · Updated 3 years ago
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- A *tuned* minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆119 · Updated 4 years ago
- InsNet runs instance-dependent neural networks with padding-free dynamic batching. ☆67 · Updated 4 years ago
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 4 years ago
- [EMNLP 2022] Official implementation of TransNormer from the paper "The Devil in Linear Transformer" ☆64 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆40 · Updated 2 years ago
- Distributed DataLoader for PyTorch based on Ray ☆24 · Updated 4 years ago
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- Analytical solutions for logistic regression and a single-layer softmax classifier ☆12 · Updated 4 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- ☆19 · Updated last year
- ☆79 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- A summary of systems papers, frameworks, code, and tools for training or serving large models ☆57 · Updated 2 years ago
- A personal reimplementation of Google's Infini-Transformer using a small 2B model; the project includes both the model and training code ☆58 · Updated last year
- A Transformer model based on the Gated Attention Unit (early-preview version) ☆98 · Updated 2 years ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithms ☆98 · Updated 4 months ago
- Models and examples built with OneFlow ☆101 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear attention mechanism ☆104 · Updated last year
- ☆12 · Updated 2 years ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonathan Berant); a sketch of the selection rule appears after this list ☆70 · Updated 4 years ago
- Patches for Hugging Face Transformers to save memory ☆32 · Updated 7 months ago
- A Triton version of GQA flash attention, based on the tutorial ☆12 · Updated last year
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation" ☆33 · Updated 2 years ago
- Code for the paper "Query-Key Normalization for Transformers" (see the sketch after this list) ☆50 · Updated 4 years ago
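For reference alongside the Lion-vs-Adam comparison above: Lion (Chen et al., 2023) keeps a single momentum buffer and takes a sign-based step, in contrast to Adam's two buffers and per-coordinate scaling. The sketch below is a minimal NumPy rendering of the published update rule, not code from that repository; `lion_step` and its default hyperparameters are illustrative.

```python
import numpy as np

def lion_step(param, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update (Chen et al., 2023). Returns (new_param, new_m)."""
    update = np.sign(beta1 * m + (1 - beta1) * grad)  # interpolate, then take the sign
    param = param - lr * (update + wd * param)        # decoupled weight decay
    m = beta2 * m + (1 - beta2) * grad                # momentum tracks the raw gradients
    return param, m

# Tiny demo: drive a quadratic toward its minimum at w = 3.
w, m = np.array([0.0]), np.array([0.0])
for _ in range(2000):
    g = 2 * (w - 3.0)            # gradient of (w - 3)^2
    w, m = lion_step(w, g, m, lr=1e-2)
print(w)  # close to [3.]
```

Because the sign makes every coordinate move by exactly `lr`, Lion is typically run with a smaller learning rate and larger weight decay than Adam, which is the kind of trade-off such a comparison repo examines.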
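Similarly, the selection rule behind "Memory-efficient Transformers via Top-k Attention" fits in a few lines: each query keeps only its k largest attention scores and masks the rest to negative infinity before the softmax. This sketch shows the math only; the paper's memory savings come from computing queries chunk-by-chunk rather than materializing the full score matrix, and `topk_attention` is a hypothetical name.

```python
import numpy as np

def topk_attention(q, k, v, topk=8):
    """Softmax attention where each query attends only to its top-k keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (n_q, n_k)
    kth = np.sort(scores, axis=-1)[:, -topk][:, None]  # k-th largest score per query
    scores = np.where(scores >= kth, scores, -np.inf)  # mask everything below it
    scores -= scores.max(axis=-1, keepdims=True)       # numerically stable softmax
    weights = np.exp(scores)                           # masked entries become 0
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

q = np.random.randn(4, 16)
k = np.random.randn(32, 16)
v = np.random.randn(32, 16)
out = topk_attention(q, k, v, topk=8)  # shape (4, 16)
```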
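Finally, Query-Key Normalization (Henry et al., 2020) L2-normalizes queries and keys along the head dimension, so each logit becomes a cosine similarity in [-1, 1] scaled by a learned scalar g, replacing the usual 1/sqrt(d). A minimal single-head sketch, assuming a fixed g where the paper learns it:

```python
import numpy as np

def qk_norm_attention(q, k, v, g=8.0):
    """Attention with Query-Key Normalization: cosine-similarity logits scaled by g."""
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)  # unit-norm queries
    k = k / np.linalg.norm(k, axis=-1, keepdims=True)  # unit-norm keys
    scores = g * (q @ k.T)                             # bounded logits in [-g, g]
    scores -= scores.max(axis=-1, keepdims=True)       # stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

q = np.random.randn(4, 16)
k = np.random.randn(32, 16)
v = np.random.randn(32, 16)
out = qk_norm_attention(q, k, v)  # shape (4, 16)
```

Bounding the logits this way keeps the softmax away from saturation regardless of the hidden dimension, which is the stability argument the paper makes.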