GeeeekExplorer / cupytorch
A small framework that mimics PyTorch using CuPy or NumPy
☆44 · Updated 3 years ago
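The core idea such a framework demonstrates, a PyTorch-like Tensor with reverse-mode autograd built on a NumPy (or CuPy) array backend, can be sketched as follows. This is a minimal illustrative sketch; the class and method names are assumptions for this example, not cupytorch's actual API:

```python
import numpy as np  # swap for `import cupy as np` to run on GPU


class Tensor:
    """Illustrative mini-Tensor with reverse-mode autograd over NumPy arrays."""

    def __init__(self, data, _parents=()):
        self.data = np.asarray(data, dtype=np.float64)
        self.grad = np.zeros_like(self.data)
        self._parents = _parents            # tensors this one was computed from
        self._backward = lambda: None       # propagates self.grad to parents

    def __add__(self, other):
        out = Tensor(self.data + other.data, _parents=(self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Tensor(self.data * other.data, _parents=(self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def sum(self):
        out = Tensor(self.data.sum(), _parents=(self,))
        def _backward():
            self.grad += np.ones_like(self.data) * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then run each node's backward pass.
        topo, seen = [], set()
        def build(t):
            if id(t) not in seen:
                seen.add(id(t))
                for p in t._parents:
                    build(p)
                topo.append(t)
        build(self)
        self.grad = np.ones_like(self.data)
        for t in reversed(topo):
            t._backward()


x = Tensor([1.0, 2.0, 3.0])
y = Tensor([4.0, 5.0, 6.0])
loss = (x * y).sum()
loss.backward()
# x.grad is now [4., 5., 6.] (i.e. y.data), y.grad is [1., 2., 3.]
```

Because CuPy mirrors the NumPy API, the same operator definitions run on GPU by changing only the import, which is the trick that keeps such a framework small.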
Alternatives and similar repositories for cupytorch
Users interested in cupytorch are comparing it to the libraries listed below.
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- Contextual Position Encoding with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆86 · Updated last year
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆33 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 · Updated last year
- ☆32 · Updated last year
- A personal reimplementation of Google's Infini-transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Updated last month
- ☆22 · Updated last year
- Patches for Hugging Face Transformers to save memory ☆27 · Updated 3 months ago
- [EMNLP 2022] Official implementation of Transnormer from our EMNLP 2022 paper, "The Devil in Linear Transformer" ☆62 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆102 · Updated last year
- Triton version of GQA flash attention, based on the tutorial ☆12 · Updated last year
- ☆106 · Updated last year
- Differentiable top-k operator ☆22 · Updated 8 months ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆61 · Updated last week
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- Lion and Adam optimization comparison ☆63 · Updated 2 years ago
- Distributed DataLoader for PyTorch based on Ray ☆24 · Updated 3 years ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 7 months ago
- A *tuned* minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training ☆117 · Updated 4 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆76 · Updated last year
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆69 · Updated 3 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆99 · Updated 2 years ago
- Notes from my introductory NLP course at Fudan University ☆37 · Updated 4 years ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- ☆15 · Updated last year
- ☆21 · Updated 2 weeks ago
- ☆52 · Updated 2 months ago