The repository for the code of the UltraFastBERT paper
☆518 · Updated Mar 24, 2024
Alternatives and similar repositories for UltraFastBERT
Users interested in UltraFastBERT are comparing it to the repositories listed below.
- A repository for log-time feedforward networks (☆224, updated Apr 9, 2024)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆280, updated Nov 3, 2023)
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" (☆561, updated Dec 28, 2024)
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction (☆390, updated Jul 9, 2024)
- Some common Huggingface transformers in maximal update parametrization (µP) (☆87, updated Mar 14, 2022)
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… (☆2,178, updated Oct 8, 2024)
- Scaling Data-Constrained Language Models (☆343, updated Jun 28, 2025)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆1,327, updated Mar 6, 2025)
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch (☆654, updated Dec 27, 2024)
- Convolutions for Sequence Modeling (☆911, updated Jun 13, 2024)
- Official PyTorch implementation of QA-LoRA (☆146, updated Mar 13, 2024)
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆39, updated Jun 11, 2025)
- (☆13, updated Jan 17, 2024)
- (☆35, updated Apr 12, 2024)
- Official implementation of Half-Quadratic Quantization (HQQ) (☆925, updated Feb 26, 2026)
- (☆50, updated Mar 14, 2024)
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch (☆2,940, updated Mar 8, 2024)
- (☆10, updated Jun 8, 2024)
- YaRN: Efficient Context Window Extension of Large Language Models (☆1,690, updated Apr 17, 2024)
- Serving multiple LoRA-finetuned LLMs as one (☆1,152, updated May 8, 2024)
- Accessible large language models via k-bit quantization for PyTorch (☆8,107, updated this week)
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… (☆226, updated Sep 18, 2025)
- A forest of autonomous agents (☆20, updated Jan 27, 2025)
- (☆83, updated Apr 16, 2024)
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… (☆14,458, updated Mar 30, 2026)
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of Python (☆6,194, updated Aug 22, 2025)
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" (☆877, updated Aug 20, 2024)
- Tools for merging pretrained large language models (☆6,945, updated Mar 15, 2026)
- Understand and test language model architectures on synthetic tasks (☆264, updated Mar 22, 2026)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" (☆251, updated Jun 6, 2025)
- Fast inference engine for Transformer models (☆4,417, updated Feb 4, 2026)
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks (☆2,978, updated Apr 2, 2026)
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks (☆7,208, updated Jul 11, 2024)
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" (☆101, updated Sep 30, 2024)
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset (☆18, updated Nov 1, 2023)
- Robust recipes to align language models with human and AI preferences (☆5,551, updated Apr 2, 2026)
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… (☆1,451, updated this week)
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning (☆643, updated Mar 4, 2024)
- A byte-level decoder architecture that matches the performance of tokenized Transformers (☆67, updated Apr 24, 2024)