The repository for the code of the UltraFastBERT paper
☆518 · Updated Mar 24, 2024
Alternatives and similar repositories for UltraFastBERT
Users that are interested in UltraFastBERT are comparing it to the libraries listed below. We may earn a commission when you buy through links labeled 'Ad' on this page.
Sorting:
- A repository for log-time feedforward networks · ☆224 · Updated Apr 9, 2024
- FastFeedForward Networks · ☆20 · Updated Dec 8, 2023
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" · ☆280 · Updated Nov 3, 2023
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ☆563 · Updated Dec 28, 2024
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction · ☆390 · Updated Jul 9, 2024
- Some common Hugging Face transformers in maximal update parametrization (µP) · ☆88 · Updated Mar 14, 2022
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… · ☆2,178 · Updated Oct 8, 2024
- Scaling Data-Constrained Language Models · ☆343 · Updated Jun 28, 2025
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding · ☆1,333 · Updated Mar 6, 2025
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch · ☆655 · Updated Dec 27, 2024
- Convolutions for Sequence Modeling · ☆911 · Updated Jun 13, 2024
- Official PyTorch implementation of QA-LoRA · ☆146 · Updated Mar 13, 2024
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" · ☆39 · Updated Jun 11, 2025
- ☆35 · Updated Apr 12, 2024
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆931 · Updated Feb 26, 2026
- ☆50 · Updated Mar 14, 2024
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch · ☆2,944 · Updated Mar 8, 2024
- ☆10 · Updated Jun 8, 2024
- Serving multiple LoRA-finetuned LLMs as one · ☆1,155 · Updated May 8, 2024
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,708 · Updated Apr 17, 2024
- Accessible large language models via k-bit quantization for PyTorch · ☆8,168 · Updated Apr 20, 2026
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… · ☆226 · Updated Sep 18, 2025
- A forest of autonomous agents · ☆20 · Updated Jan 27, 2025
- ☆83 · Updated Apr 16, 2024
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… · ☆14,492 · Updated this week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python · ☆6,204 · Updated Aug 22, 2025
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" · ☆880 · Updated Aug 20, 2024
- Tools for merging pretrained large language models · ☆7,023 · Updated Mar 15, 2026
- Understand and test language model architectures on synthetic tasks · ☆265 · Updated Mar 22, 2026
- Fast inference engine for Transformer models · ☆4,457 · Updated Feb 4, 2026
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆252 · Updated Jun 6, 2025
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks · ☆3,015 · Updated Apr 20, 2026
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks · ☆7,222 · Updated Jul 11, 2024
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" · ☆101 · Updated Sep 30, 2024
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset · ☆18 · Updated Nov 1, 2023
- Robust recipes to align language models with human and AI preferences · ☆5,587 · Updated Apr 8, 2026
- Storing long contexts in tiny caches with self-study · ☆262 · Updated Mar 23, 2026
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… · ☆1,454 · Updated this week
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning · ☆643 · Updated Mar 4, 2024