thu-ml / Zhusuan-PaddlePaddle
Zhusuan with a PaddlePaddle backend
☆8 · Updated 3 years ago
Alternatives and similar repositories for Zhusuan-PaddlePaddle
Users interested in Zhusuan-PaddlePaddle are comparing it to the libraries listed below.
- An object detection codebase based on MegEngine. ☆28 · Updated 2 years ago
- TVMScript kernel for deformable attention ☆25 · Updated 3 years ago
- ☆13 · Updated 2 years ago
- A toolkit for developers to simplify the transformation of nn.Module instances. It now corresponds to torch.fx. ☆13 · Updated 2 years ago
- A codebase & model zoo for pretrained backbones based on MegEngine. ☆33 · Updated 2 years ago
- Large-batch Optimization for Dense Visual Predictions (NeurIPS 2022) ☆57 · Updated 2 years ago
- A repo for my CUDA training code. ☆9 · Updated 4 years ago
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated last year
- An MXNet object detection library containing implementations of R-FCN, FCOS, RetinaNet, OpenPose, etc. ☆31 · Updated 4 years ago
- PyTorch Dataset Rank Dataset ☆43 · Updated 4 years ago
- OneFlow->ONNX ☆43 · Updated 2 years ago
- Translate different platforms' networks to an Intermediate Representation (IR) ☆44 · Updated 7 years ago
- Training the LLaMA language model with MMEngine! It supports LoRA fine-tuning! ☆40 · Updated 2 years ago
- ☆11 · Updated last year
- Android demo for dabnn ☆20 · Updated 5 years ago
- Benchmarking Attention Mechanism in Vision Transformers. ☆18 · Updated 2 years ago
- Useful dotfiles, including configs for vim, zsh, tmux, and VS Code ☆18 · Updated last week
- ☆11 · Updated last year
- ☆36 · Updated 2 years ago
- ☆60 · Updated 11 months ago
- ☆15 · Updated 2 years ago
- ☆17 · Updated 5 months ago
- ☆24 · Updated 2 years ago
- Datasets, Transforms and Models specific to Computer Vision ☆85 · Updated last year
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆71 · Updated 2 years ago
- You are welcome to join us! ☆50 · Updated 4 years ago
- Code for "Searching for Efficient Multi-Stage Vision Transformers" ☆63 · Updated 3 years ago
- 🤗CacheDiT: A Training-free and Easy-to-use Cache Acceleration Toolbox for Diffusion Transformers🔥 ☆61 · Updated this week
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆35 · Updated 2 years ago
- ☆12 · Updated 3 years ago