GeeeekExplorer / cupytorch
A small framework that mimics PyTorch using CuPy or NumPy
☆47 · Updated 3 years ago
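This page doesn't show cupytorch's internals, but the core idea of a PyTorch mimic built on NumPy/CuPy is a tensor that records the operations that produced it and back-propagates gradients by the chain rule. The sketch below is a minimal, hypothetical illustration of that idea, not cupytorch's actual API (the `Tensor` class and its methods are invented here); swapping the NumPy import for CuPy moves the same code to the GPU.

```python
import numpy as np  # replace with `import cupy as np` to run the same code on GPU

class Tensor:
    """Hypothetical autograd tensor; illustrative only, not cupytorch's API."""

    def __init__(self, data, parents=()):
        self.data = np.asarray(data, dtype=np.float32)
        self.grad = np.zeros_like(self.data)
        self._parents = parents      # tensors this one was computed from
        self._backward_fn = None     # pushes self.grad back to the parents

    def __mul__(self, other):
        out = Tensor(self.data * other.data, parents=(self, other))
        def backward_fn():
            self.grad += other.data * out.grad   # d out / d self  = other
            other.grad += self.data * out.grad   # d out / d other = self
        out._backward_fn = backward_fn
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(t):
            if id(t) not in seen:
                seen.add(id(t))
                for p in t._parents:
                    visit(p)
                order.append(t)
        visit(self)
        self.grad = np.ones_like(self.data)      # seed: d self / d self = 1
        for t in reversed(order):
            if t._backward_fn is not None:
                t._backward_fn()

x = Tensor([2.0, 3.0])
y = Tensor([4.0, 5.0])
(x * y).backward()
print(x.grad)  # [4. 5.] -- equals y.data, as expected for z = x * y
```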
Alternatives and similar repositories for cupytorch
Users interested in cupytorch are comparing it to the libraries listed below.
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- Lion and Adam optimization comparison (see the Lion update sketch after this list) ☆64 · Updated 2 years ago
- ☆22 · Updated last year
- ☆19 · Updated last year
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha…) (see the top-k masking sketch after this list) ☆69 · Updated 4 years ago
- ☆32 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆77 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- Fast LLM Training CodeBase With dynamic strategy choosing [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 · Updated last year
- Code for the paper "Query-Key Normalization for Transformers" ☆49 · Updated 4 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Updated 3 months ago
- InsNet Runs Instance-dependent Neural Networks with Padding-free Dynamic Batching ☆67 · Updated 3 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆63 · Updated 2 years ago
- Notes from my introduction to NLP at Fudan University ☆37 · Updated 4 years ago
- A *tuned* minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆118 · Updated 4 years ago
- ☆16 · Updated last year
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆100 · Updated 2 years ago
- 📑 Dive into Big Model Training ☆114 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- patches for huggingface transformers to save memory ☆30 · Updated 4 months ago
- ☆105 · Updated last year
- A Transformer-based single-model, multi-scale VAE ☆57 · Updated 4 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆33 · Updated 2 years ago
- differentiable top-k operator ☆22 · Updated 9 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆103 · Updated last year
- Shuffling hundreds of GB of files in Python ☆33 · Updated 4 years ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated last month
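For the Lion-vs-Adam comparison entry above, the Lion update rule from "Symbolic Discovery of Optimization Algorithms" (Chen et al., 2023) is compact enough to sketch. This is an illustrative NumPy version, not code from the listed repository:

```python
import numpy as np

def lion_step(param, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion update (illustrative sketch, not the listed repo's code).

    Unlike Adam, the per-coordinate step size is fixed at `lr` because only
    the sign of the interpolated momentum is used, and only one momentum
    buffer `m` is kept (Adam keeps two).
    """
    update = np.sign(beta1 * m + (1 - beta1) * grad)  # sign -> uniform magnitude
    param = param - lr * (update + weight_decay * param)
    m = beta2 * m + (1 - beta2) * grad                # momentum tracks gradients
    return param, m
```

Relative to Adam, Lion keeps a single momentum buffer instead of two, and the `sign` nonlinearity gives every coordinate the same step magnitude, decoupling the step size from the gradient scale; these are the points such comparisons usually measure.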
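Likewise, the top-k attention entry refers to a simple core operator: keep only each query's k largest attention scores and mask the rest before the softmax. The sketch below shows just that masking step in NumPy; it is an illustration, not the paper's accompanying code, and it omits the chunked computation the paper uses to realize its memory savings:

```python
import numpy as np

def topk_attention(q, k, v, top_k):
    """Single-head attention keeping only the top-k scores per query.
    Illustrative sketch; assumes 1 <= top_k <= number of keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])             # (n_q, n_k) scaled dot products
    kth = np.sort(scores, axis=-1)[:, -top_k][:, None]  # k-th largest score per row
    masked = np.where(scores >= kth, scores, -np.inf)   # drop everything smaller
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over surviving scores
    return weights @ v                                  # (n_q, d_v)
```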