sekstini / basedxl
☆16 · Updated last year
Alternatives and similar repositories for basedxl:
Users interested in basedxl are comparing it to the repositories listed below.
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated last year
- ☆46 · Updated 8 months ago
- QuIP quantization ☆52 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 11 months ago
- ☆19 · Updated 5 months ago
- ☆27 · Updated 7 months ago
- PyTorch half-precision GEMM library with fused optional bias + optional ReLU/GELU ☆55 · Updated 3 months ago
- BigKnow2022: Bringing Language Models Up to Speed ☆14 · Updated last year
- DPO, but faster 🚀 ☆40 · Updated 3 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 8 months ago
- ☆49 · Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆26 · Updated 6 months ago
- Writing FLUX in Triton ☆32 · Updated 6 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated this week
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆31 · Updated 7 months ago
- Make Triton easier ☆47 · Updated 9 months ago
- Repository for CPU kernel generation for LLM inference ☆25 · Updated last year
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆18 · Updated 4 months ago
- Implementation of https://arxiv.org/pdf/2312.09299 ☆20 · Updated 8 months ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- Triton kernels for Flux ☆20 · Updated 2 months ago
- RWKV-7: Surpassing GPT ☆81 · Updated 4 months ago
- Repository for sparse finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- ☆26 · Updated last year
- ☆32 · Updated 4 months ago
- Here we will test various linear attention designs. ☆60 · Updated 10 months ago
- FlexAttention w/ FlashAttention3 support ☆26 · Updated 5 months ago