yifanlu0227 / LLaMA2-7B-on-laptop
Lab 5 project for MIT 6.5940, deploying LLaMA2-7B-chat on a laptop with TinyChatEngine.
⭐ 16 · Updated last year
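TinyChatEngine fits a 7B model into laptop memory by serving low-bit quantized weights. Below is a minimal, illustrative NumPy sketch of symmetric per-group INT4 weight quantization in that spirit; the group size, rounding scheme, and function names are my own assumptions, not the project's actual kernels.

```python
# Illustrative sketch of symmetric per-group INT4 weight quantization,
# the style of compression used to fit LLaMA2-7B on a laptop.
# Group size (128) and rounding scheme are assumptions, not TinyChatEngine's
# actual implementation.
import numpy as np

def quantize_int4(w: np.ndarray, group_size: int = 128):
    """Quantize a 1-D float weight vector to INT4 values with per-group scales."""
    groups = w.reshape(-1, group_size)
    # One scale per group; the symmetric INT4 range is [-8, 7].
    scale = np.maximum(np.abs(groups).max(axis=1, keepdims=True) / 7.0, 1e-8)
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from INT4 values and per-group scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

# 4 bits per weight shrinks 7B fp16 parameters (~13 GB) to roughly 3.5 GB.
w = np.random.randn(1024).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
print("max reconstruction error:", np.abs(w - w_hat).max())
```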
Alternatives and similar repositories for LLaMA2-7B-on-laptop:
Users interested in LLaMA2-7B-on-laptop are comparing it to the libraries listed below.
- Hands-on model tuning with TVM, profiled on an M1 Mac, an x86 CPU, and a GTX 1080 GPU. ⭐ 45 · Updated last year
- ⭐ 19 · Updated last year
- Automatically updates circult-eda-mlsys-tinyml papers daily using GitHub Actions (updated every 8 hours). ⭐ 10 · Updated this week
- A summary of awesome work on optimizing LLM inference. ⭐ 66 · Updated this week
- ⭐ 61 · Updated 4 months ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24). ⭐ 49 · Updated 10 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24). ⭐ 13 · Updated 8 months ago
- All homework for TinyML and Efficient Deep Learning Computing (6.5940, Fall 2023) • https://efficientml.ai ⭐ 162 · Updated last year
- ⭐ 94 · Updated last year
- ⭐ 53 · Updated 11 months ago
- [ACL 2024] A novel QAT with self-distillation framework to enhance ultra-low-bit LLMs. ⭐ 109 · Updated 10 months ago
- Optimized softmax kernels in Triton for many cases (see the sketch after this list). ⭐ 20 · Updated 6 months ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ⭐ 23 · Updated this week
- Solutions to Programming Massively Parallel Processors. ⭐ 42 · Updated last year
- ⭐ 100 · Updated 3 weeks ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24). ⭐ 32 · Updated 3 months ago
- ⭐ 129 · Updated 8 months ago
- LLM inference analyzer for different hardware platforms. ⭐ 54 · Updated last week
- ⭐ 48 · Updated 2 months ago
- Examples of CUDA implementations using CUTLASS CuTe. ⭐ 148 · Updated last month
- List of papers on Vision Transformer quantization and hardware acceleration from recent AI conferences and journals. ⭐ 78 · Updated 9 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores. ⭐ 50 · Updated last year
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24). ⭐ 119 · Updated 8 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs. ⭐ 100 · Updated 3 months ago
- Tutorials on extending and importing TVM as a CMake include dependency. ⭐ 13 · Updated 5 months ago
- Implementation of Flash Attention using CuTe. ⭐ 74 · Updated 3 months ago
- ⭐ 160 · Updated last year
- Optimizing GEMM with Tensor Cores, step by step. ⭐ 24 · Updated last year
- Hands-On Practical MLIR Tutorial. ⭐ 20 · Updated 8 months ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores. ⭐ 86 · Updated 2 years ago
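For the Triton softmax entry above, here is a minimal sketch of a numerically stable row-wise softmax kernel. The kernel structure and block-size choice follow the standard Triton tutorial pattern; this is my own illustration, not code from that repository.

```python
# Minimal sketch of a row-wise softmax in Triton (standard tutorial pattern).
# Requires a CUDA device; the input tensor must be 2-D and contiguous.
import torch
import triton
import triton.language as tl

@triton.jit
def softmax_kernel(out_ptr, in_ptr, n_cols, BLOCK_SIZE: tl.constexpr):
    # One program instance handles one row of the input matrix.
    row = tl.program_id(0)
    offsets = tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_cols
    x = tl.load(in_ptr + row * n_cols + offsets, mask=mask, other=-float("inf"))
    # Subtract the row max for numerical stability before exponentiating.
    x = x - tl.max(x, axis=0)
    num = tl.exp(x)
    y = num / tl.sum(num, axis=0)
    tl.store(out_ptr + row * n_cols + offsets, y, mask=mask)

def softmax(x: torch.Tensor) -> torch.Tensor:
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    # Each program loads one full row, padded to the next power of two.
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    softmax_kernel[(n_rows,)](out, x, n_cols, BLOCK_SIZE=BLOCK_SIZE)
    return out

x = torch.randn(64, 781, device="cuda")
assert torch.allclose(softmax(x), torch.softmax(x, dim=-1), atol=1e-6)
```

Optimized versions (like the repository listed above) go further, e.g. splitting very long rows across programs and fusing the reduction, but the stable max-subtract-exp-normalize structure is the common core.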