MrYxJ / calculate-flops.pytorch
calflops is designed to calculate FLOPs, MACs, and parameters for a wide range of neural networks, such as Linear, CNN, RNN, GCN, and Transformer architectures (BERT, LLaMA, and other large language models).
☆721 · Updated 8 months ago
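For orientation, here is a minimal usage sketch. It assumes `calflops` and `torchvision` are installed; the `calculate_flops` helper and its keyword arguments follow the project's documented example, but the exact signature should be verified against the installed release.

```python
# Minimal sketch: profiling a torchvision model with calflops.
# Assumes `pip install calflops torchvision`; arguments mirror the project's
# README example and may differ between versions.
from calflops import calculate_flops
from torchvision import models

model = models.resnet50()
batch_size = 1
input_shape = (batch_size, 3, 224, 224)  # NCHW input for an image model

flops, macs, params = calculate_flops(
    model=model,
    input_shape=input_shape,
    output_as_string=True,   # return human-readable strings, e.g. "4.09 GFLOPS"
    output_precision=4,
)
print(f"FLOPs: {flops}  MACs: {macs}  Params: {params}")
```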
Alternatives and similar repositories for calculate-flops.pytorch:
Users interested in calculate-flops.pytorch are comparing it to the libraries listed below.
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆782 · Updated this week
- ☆611 · Updated last month
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,112 · Updated 2 weeks ago
- A curated reading list of research in Mixture-of-Experts (MoE). ☆595 · Updated 4 months ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆409 · Updated 6 months ago
- List of papers related to neural network quantization in recent AI conferences and journals. ☆550 · Updated 2 months ago
- A collection of AWESOME things about mixture-of-experts ☆1,067 · Updated 3 months ago
- Survey Paper List - Efficient LLM and Foundation Models ☆240 · Updated 5 months ago
- Efficient Multimodal Large Language Models: A Survey ☆325 · Updated last week
- A library for calculating the FLOPs in the forward() process based on torch.fx (see the sketch after this list) ☆99 · Updated 6 months ago
- Awesome list for LLM pruning. ☆209 · Updated 2 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,358 · Updated 8 months ago
- A general and accurate MACs / FLOPs profiler for PyTorch models ☆599 · Updated 10 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆445 · Updated 3 weeks ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆275 · Updated last week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆637 · Updated this week
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆280 · Updated last month
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆180 · Updated 2 years ago
- A paper list of some recent works about Token Compression for ViT and VLM ☆364 · Updated this week
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆269 · Updated 2 weeks ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆310 · Updated this week
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆326 · Updated last year
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆700 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆965 · Updated 5 months ago
- We introduce a novel approach for parameter generation, named neural network parameter diffusion (p-diff), which employs a standard laten… ☆853 · Updated 2 months ago
- Fast inference from large language models via speculative decoding ☆678 · Updated 6 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆157 · Updated 10 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,020 · Updated 8 months ago
- Ring attention implementation with flash attention ☆707 · Updated 2 weeks ago
- Lossless Training Speed Up by Unbiased Dynamic Data Pruning ☆329 · Updated 5 months ago
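For the torch.fx-based FLOPs entry above, the general idea is to symbolically trace the model into a graph, propagate example-input shapes through it, and then sum a per-operator cost for the nodes the counter knows how to price. The snippet below is a minimal, hypothetical sketch of that idea restricted to nn.Linear modules; the function name `count_linear_macs` and the cost rule are illustrative, not the listed library's actual code.

```python
# Hedged sketch of torch.fx-based MAC counting: trace the model, propagate
# shapes, then sum MACs for the call_module nodes we know how to cost.
import torch
import torch.nn as nn
import torch.fx
from torch.fx.passes.shape_prop import ShapeProp

def count_linear_macs(model: nn.Module, example_input: torch.Tensor) -> int:
    gm = torch.fx.symbolic_trace(model)
    ShapeProp(gm).propagate(example_input)   # annotate nodes with output shapes
    modules = dict(gm.named_modules())
    macs = 0
    for node in gm.graph.nodes:
        if node.op == "call_module" and isinstance(modules[node.target], nn.Linear):
            out_shape = node.meta["tensor_meta"].shape  # (..., out_features)
            layer = modules[node.target]
            # One multiply-accumulate per (input feature, output element) pair.
            macs += layer.in_features * out_shape.numel()
    return macs

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
print(count_linear_macs(model, torch.randn(1, 128)))  # FLOPs ≈ 2 × MACs
```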