MrYxJ / calculate-flops.pytorch
calflops is designed to calculate FLOPs, MACs, and parameters across a wide variety of neural networks, such as Linear, CNN, RNN, GCN, and Transformer models (BERT, LLaMA, and other large language models).
☆557 · Updated 4 months ago
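To make the three quantities concrete, here is a generic back-of-the-envelope calculation for a single fully connected (Linear) layer. This is an illustrative sketch using the common convention that one MAC is one multiply plus one add (so FLOPs ≈ 2 × MACs); it is not calflops' own code, and exact numbers from any counting tool may differ depending on conventions.

```python
# Back-of-the-envelope counts for one Linear layer (in_features -> out_features),
# per input sample. Convention (an assumption, not universal): one MAC = one
# multiply + one add, so FLOPs = 2 * MACs; the parameter count includes the bias.

def linear_layer_counts(in_features: int, out_features: int, bias: bool = True):
    params = in_features * out_features + (out_features if bias else 0)
    macs = in_features * out_features  # one multiply-accumulate per weight
    flops = 2 * macs                   # multiplies and adds counted separately
    return params, macs, flops

# Example: a 768 -> 3072 projection, the feed-forward up-projection size in BERT-base
params, macs, flops = linear_layer_counts(768, 3072)
print(params, macs, flops)  # 2362368 2359296 4718592
```

The same per-layer accounting, summed over every module touched in a forward pass, is what FLOPs/MACs counters automate.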
Related projects
Alternatives and complementary repositories for calculate-flops.pytorch
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆729 · Updated 2 weeks ago
- A curated reading list of research in Mixture-of-Experts (MoE). ☆533 · Updated last week
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆36 · Updated last week
- A collection of AWESOME things about mixture-of-experts ☆964 · Updated 3 months ago
- ☆572 · Updated this week
- A library for calculating the FLOPs in the forward() process based on torch.fx ☆79 · Updated 2 months ago
- Awesome list for LLM pruning. ☆159 · Updated last month
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,018 · Updated this week
- [IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer ☆308 · Updated last year
- List of papers related to neural network quantization in recent AI conferences and journals. ☆453 · Updated last month
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆639 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆130 · Updated 6 months ago
- Analyze the inference of Large Language Models (LLMs), covering aspects like computation, storage, transmission, and hardware roofline mod… ☆310 · Updated 2 months ago
- A list of papers, docs, and code about efficient AIGC. This repo aims to provide info for efficient AIGC research, including languag… ☆152 · Updated last week
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆167 · Updated last year
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,325 · Updated this week
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆317 · Updated 3 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆254 · Updated 2 months ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆451 · Updated this week
- ☆165 · Updated 2 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆184 · Updated 6 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆864 · Updated last month
- ☆284 · Updated 7 months ago
- Awesome LLM compression research papers and tools. ☆1,177 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,245 · Updated 4 months ago
- Collection of papers on state-space models ☆551 · Updated last week
- Survey Paper List - Efficient LLM and Foundation Models ☆217 · Updated last month
- Rotary Transformer ☆812 · Updated 2 years ago
- Official code for our CVPR'22 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" ☆245 · Updated last year
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆635 · Updated 3 weeks ago