satabios / sconce
E2E AutoML Model Compression Package
☆46 · Updated 7 months ago
Alternatives and similar repositories for sconce
Users interested in sconce are comparing it to the libraries listed below.
- Collection of autoregressive model implementations · ☆86 · Updated 5 months ago
- Small Batch Size Training for Language Models · ☆63 · Updated last week
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… · ☆99 · Updated this week
- NanoGPT-speedrunning for the poor T4 enjoyers · ☆72 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆130 · Updated 10 months ago
- Work in progress. · ☆74 · Updated 3 months ago
- PTX-Tutorial Written Purely By AIs (OpenAI's Deep Research and Claude 3.7) · ☆66 · Updated 6 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" · ☆85 · Updated last month
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" · ☆154 · Updated 11 months ago
- Experiment of using Tangent to autodiff Triton · ☆80 · Updated last year
- ☆100 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs · ☆101 · Updated 3 months ago
- ☆69 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind · ☆129 · Updated last year
- Samples of good AI-generated CUDA kernels · ☆91 · Updated 4 months ago
- ☆46 · Updated last year
- H-Net Dynamic Hierarchical Architecture · ☆80 · Updated last month
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry · ☆42 · Updated last year
- This code repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… · ☆92 · Updated 2 years ago
- ☆89 · Updated last year
- ☆91 · Updated last year
- ☆28 · Updated last year
- ☆102 · Updated 2 months ago
- Fork of the Flame repo for training some new stuff in development · ☆18 · Updated this week
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM · ☆59 · Updated last year
- ☆58 · Updated last year
- RWKV-7: Surpassing GPT · ☆97 · Updated 10 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆38 · Updated 4 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. · ☆45 · Updated last year
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… · ☆47 · Updated last month