satabios / sconce
E2E AutoML Model Compression Package
☆45 · Updated 10 months ago
Alternatives and similar repositories for sconce
Users interested in sconce are comparing it to the libraries listed below.
- Collection of autoregressive model implementations ☆85 · Updated 3 weeks ago
- Work in progress. ☆79 · Updated 2 months ago
- ☆118 · Updated last month
- Explore training for quantized models ☆26 · Updated 6 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆89 · Updated 2 weeks ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 · Updated 10 months ago
- H-Net Dynamic Hierarchical Architecture ☆81 · Updated 4 months ago
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels ☆84 · Updated 2 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆174 · Updated 3 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆128 · Updated 4 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆117 · Updated 2 weeks ago
- Working implementation of DeepSeek MLA ☆45 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- ☆92 · Updated last year
- DPO, but faster 🚀 ☆47 · Updated last year
- A collection of tricks and tools to speed up transformer models ☆194 · Updated last month
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- 📄 Small Batch Size Training for Language Models ☆80 · Updated 4 months ago
- Supporting code for the blog post on modular manifolds ☆115 · Updated 4 months ago
- Samples of good AI-generated CUDA kernels ☆99 · Updated 8 months ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆82 · Updated 2 years ago
- Personal solutions to the Triton Puzzles ☆20 · Updated last year
- ☆45 · Updated 8 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆71 · Updated this week