☆77 · Updated Nov 5, 2024
Alternatives and similar repositories for tinychat-tutorial
Users interested in tinychat-tutorial are comparing it with the libraries listed below.
- ☆176 · Updated Aug 9, 2023
- TinyChatEngine: On-Device LLM Inference Library (☆945 · Updated Jul 4, 2024)
- Benchmark tests supporting the TiledCUDA library (☆18 · Updated Nov 19, 2024)
- A single-header math library (☆17 · Updated Nov 7, 2025)
- A benchmark for testing memorization abilities of LMs (☆22 · Updated Oct 15, 2024)
- OneFlow Serving (☆20 · Updated Apr 10, 2025)
- Lab 5 project of MIT 6.5940, deploying LLaMA2-7B-chat on one's laptop with TinyChatEngine (☆18 · Updated Dec 1, 2023)
- Recent Advances on MLLM's Reasoning Ability (☆26 · Updated Apr 11, 2025)
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang (☆44 · Updated Nov 19, 2025)
- IntLLaMA: A fast and light quantization solution for LLaMA (☆18 · Updated Jul 21, 2023)
- An introduction to the Vitis accelerator deployment workflow (☆11 · Updated Jan 10, 2025)
- ☆12 · Updated Mar 13, 2023
- ☆118 · Updated Nov 17, 2023
- A sparse attention kernel supporting mixed sparse patterns (☆480 · Updated Jan 18, 2026)
- A WebUI for side-by-side comparison of media (images/videos) across multiple folders (☆25 · Updated Feb 21, 2025)
- Implementation of Hyena Hierarchy in JAX (☆10 · Updated Apr 30, 2023)
- ☆19 · Updated May 4, 2023
- All homeworks for TinyML and Efficient Deep Learning Computing (6.5940, Fall 2023, https://efficientml.ai) (☆193 · Updated Dec 2, 2023)
- ☆11 · Updated Dec 23, 2022
- Local inference of Llama and Qwen (Tongyi Qianwen) large models implemented with Apple Metal (☆10 · Updated Apr 26, 2024)
- ☆11 · Updated Apr 5, 2021
- A llama model inference framework implemented in CUDA C++ (☆63 · Updated Nov 8, 2024)
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning (☆125 · Updated Aug 27, 2024)
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆95 · Updated Feb 20, 2026)
- An easy-to-understand TensorOp Matmul tutorial (☆409 · Updated Mar 5, 2026)
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… (☆15 · Updated Oct 16, 2023)
- Offline RL experiments (☆15 · Updated Oct 1, 2022)
- [AAAI 2023] NAS-LID: Efficient Neural Architecture Search with Local Intrinsic Dimension (☆17 · Updated Dec 20, 2022)
- QAQ: Quality Adaptive Quantization for LLM KV Cache (☆54 · Updated Mar 27, 2024)
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models (☆24 · Updated Oct 5, 2024)
- ☆15 · Updated Oct 23, 2023
- ☆150 · Updated Jan 9, 2025
- ☆11 · Updated Jun 9, 2023
- Transformers components but in Triton (☆34 · Updated May 9, 2025)
- Debug print operator for cudagraph debugging (☆14 · Updated Aug 2, 2024)
- Multiple GEMM operators constructed with CUTLASS to support LLM inference (☆19 · Updated Aug 3, 2025)
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. "Compressing LLMs: The truth is rarely pure and never simple" (☆27 · Updated Apr 21, 2025)
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" (☆211 · Updated Nov 25, 2025)
- An external memory allocator example for PyTorch (☆16 · Updated Aug 10, 2025)