The repository for the code of the UltraFastBERT paper
☆519 · Updated Mar 24, 2024
Alternatives and similar repositories for UltraFastBERT
Users interested in UltraFastBERT are comparing it to the repositories listed below.
- A repository for log-time feedforward networks (☆224, updated Apr 9, 2024)
- FastFeedForward Networks (☆20, updated Dec 8, 2023)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆281, updated Nov 3, 2023)
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" (☆563, updated Dec 28, 2024)
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction (☆390, updated Jul 9, 2024)
- Some common Huggingface transformers in maximal update parametrization (µP) (☆87, updated Mar 14, 2022)
- ⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel Pl… (☆2,178, updated Oct 8, 2024)
- Scaling Data-Constrained Language Models (☆342, updated Jun 28, 2025)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆1,322, updated Mar 6, 2025)
- Convolutions for Sequence Modeling (☆912, updated Jun 13, 2024)
- Official PyTorch implementation of QA-LoRA (☆145, updated Mar 13, 2024)
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆39, updated Jun 11, 2025)
- (unnamed repository; ☆13, updated Jan 17, 2024)
- Official implementation of Half-Quadratic Quantization (HQQ) (☆919, updated Feb 26, 2026)
- (unnamed repository; ☆35, updated Apr 12, 2024)
- (unnamed repository; ☆50, updated Mar 14, 2024)
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch (☆2,929, updated Mar 8, 2024)
- (unnamed repository; ☆10, updated Jun 8, 2024)
- YaRN: Efficient Context Window Extension of Large Language Models (☆1,685, updated Apr 17, 2024)
- Serving multiple LoRA-finetuned LLMs as one (☆1,148, updated May 8, 2024)
- Accessible large language models via k-bit quantization for PyTorch (☆8,052, updated this week)
- ModuleFormer is an MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… (☆226, updated Sep 18, 2025)
- A forest of autonomous agents (☆20, updated Jan 27, 2025)
- (unnamed repository; ☆83, updated Apr 16, 2024)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,419, updated Mar 5, 2026)
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python (☆6,187, updated Aug 22, 2025)
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" (☆877, updated Aug 20, 2024)
- Tools for merging pretrained large language models (☆6,867, updated Mar 15, 2026)
- Understand and test language model architectures on synthetic tasks (☆262, updated Mar 16, 2026)
- Fast inference engine for Transformer models (☆4,368, updated Feb 4, 2026)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" (☆250, updated Jun 6, 2025)
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks (☆2,956, updated Mar 16, 2026)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,201, updated Jul 11, 2024)
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" (☆101, updated Sep 30, 2024)
- Robust recipes to align language models with human and AI preferences (☆5,527, updated Sep 8, 2025)
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset (☆18, updated Nov 1, 2023)
- PyTorch compiler that accelerates training and inference; get built-in optimizations for performance, memory, parallelism, and easily wri… (☆1,449, updated this week)
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning (☆642, updated Mar 4, 2024)
- A byte-level decoder architecture that matches the performance of tokenized Transformers (☆67, updated Apr 24, 2024)
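Several entries above (UltraFastBERT itself, the log-time feedforward repository, and FastFeedForward Networks) center on the fast feedforward idea: replace a dense hidden layer with a binary tree of routing neurons so that inference evaluates only one path of O(depth) neurons instead of the whole layer. A minimal pure-Python sketch of that routing scheme, assuming single-neuron leaves for brevity; the class name, sizes, and initialization are illustrative and not taken from any of the listed repositories:

```python
import random

def dot(a, b):
    """Plain-Python dot product."""
    return sum(x * y for x, y in zip(a, b))

class FastFeedForwardSketch:
    """Illustrative fast-feedforward layer: a depth-d binary tree of
    routing neurons; each input consults one neuron per level and is
    handled by a single leaf, so cost is O(d) rather than O(2^d)."""

    def __init__(self, dim, depth, seed=0):
        rng = random.Random(seed)
        self.depth = depth
        n_nodes = 2 ** depth - 1   # internal routing neurons (heap layout)
        n_leaves = 2 ** depth      # leaf units, one per root-to-leaf path
        self.node_w = [[rng.gauss(0, 1) for _ in range(dim)]
                       for _ in range(n_nodes)]
        self.leaf_w = [[rng.gauss(0, 1) for _ in range(dim)]
                       for _ in range(n_leaves)]

    def forward(self, x):
        node = 0
        for _ in range(self.depth):
            # Sign of one routing neuron decides left vs right child.
            go_right = dot(self.node_w[node], x) > 0
            node = 2 * node + (2 if go_right else 1)
        leaf = node - (2 ** self.depth - 1)
        # The leaf "network" is a single linear unit in this sketch;
        # real implementations use small leaf MLPs.
        return dot(self.leaf_w[leaf], x)

ff = FastFeedForwardSketch(dim=8, depth=3)
y = ff.forward([0.5] * 8)  # touches 3 routing neurons + 1 of 8 leaves
```

For a depth-3 tree the forward pass reads only 3 of the 7 routing neurons and 1 of the 8 leaves, which is the log-time behavior these repositories exploit at much larger widths.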