catid / bitnet_cpu
Experiments with BitNet inference on CPU
☆55 · Updated last year
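For context on what the repository explores: BitNet-style models constrain weights to ternary values {-1, 0, +1}, so a CPU inference kernel can replace multiplications with additions and subtractions. The sketch below is only an illustration of that idea, not code from bitnet_cpu; the function name ternary_matvec, the row-major int8 weight layout, and the float activations are all assumptions made for the example.

```cpp
// Illustrative sketch (not from bitnet_cpu): matrix-vector product with
// ternary weights {-1, 0, +1}, as used by BitNet-style models.
#include <cstdint>
#include <cstdio>
#include <vector>

// y = W * x, where W is row-major with entries in {-1, 0, +1}.
void ternary_matvec(const std::vector<int8_t>& W,
                    const std::vector<float>& x,
                    std::vector<float>& y,
                    size_t rows, size_t cols) {
    for (size_t r = 0; r < rows; ++r) {
        float acc = 0.0f;
        const int8_t* row = &W[r * cols];
        for (size_t c = 0; c < cols; ++c) {
            const int8_t w = row[c];
            if (w > 0)      acc += x[c];  // +1: add the activation
            else if (w < 0) acc -= x[c];  // -1: subtract the activation
            // 0: skip; no multiply is needed anywhere in the kernel
        }
        y[r] = acc;
    }
}

int main() {
    const size_t rows = 2, cols = 4;
    const std::vector<int8_t> W = {  1, 0, -1, 1,
                                    -1, 1,  0, 0 };
    const std::vector<float> x = { 0.5f, 2.0f, 1.5f, -1.0f };
    std::vector<float> y(rows);
    ternary_matvec(W, x, y, rows, cols);
    std::printf("y = [%f, %f]\n", y[0], y[1]);  // expected: [-2.0, 1.5]
    return 0;
}
```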
Alternatives and similar repositories for bitnet_cpu
Users interested in bitnet_cpu are comparing it to the libraries listed below.
- An open source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated last week
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- GGML implementation of BERT model with Python bindings and quantization. ☆54 · Updated last year
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" adapted for Llama models ☆35 · Updated last year
- Train your own small bitnet model ☆71 · Updated 7 months ago
- RWKV-7: Surpassing GPT ☆88 · Updated 6 months ago
- Inference of Mamba models in pure C ☆186 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 8 months ago
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- Audio tokenization, in the fastest way possible! ☆52 · Updated 9 months ago
- Official code for "F5R-TTS: Improving Flow-Matching based Text-to-Speech with Group Relative Policy Optimization" ☆38 · Updated this week
- Inference of Llama/Llama2/Llama3 models in NumPy ☆21 · Updated last year
- A fast RWKV Tokenizer written in Rust ☆45 · Updated 2 months ago
- Cerule - A Tiny Mighty Vision Model ☆66 · Updated 8 months ago
- Thin wrapper around GGML to make life easier ☆33 · Updated last week
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 9 months ago
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible… ☆35 · Updated this week
- Video+code lecture on building nanoGPT from scratch ☆67 · Updated 11 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- Python bindings for symphonia/opus - read various audio formats from Python and write opus files ☆64 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆37 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆62 · Updated last year
- Tokun to can tokens ☆17 · Updated 2 weeks ago