catid / bitnet_cpu
Experiments with BitNet inference on CPU
☆53 · Updated last year
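For context on what the repo experiments with: BitNet-style models quantize weights to the ternary set {-1, 0, +1}, which turns the inner loop of inference into additions and subtractions of activations instead of multiplications, plus a per-row scale. The sketch below only illustrates that idea; it is not code from bitnet_cpu, and the function name `ternary_matvec`, the unpacked `int8_t` weight layout, and the per-row scale vector are assumptions made for illustration.

```cpp
// Minimal sketch (not from the bitnet_cpu repo): BitNet b1.58-style layers keep
// weights in {-1, 0, +1}, so a matrix-vector product needs no multiplications
// in the inner loop. Real CPU implementations bit-pack the weights and use
// SIMD; this shows only the scalar idea.
#include <cstddef>
#include <cstdint>
#include <vector>

void ternary_matvec(const std::vector<std::int8_t>& W,  // rows*cols weights in {-1,0,+1}
                    const std::vector<float>& x,        // input activations, length cols
                    const std::vector<float>& scale,    // per-row dequantization scale
                    std::vector<float>& y,              // output, length rows
                    std::size_t rows, std::size_t cols) {
    for (std::size_t r = 0; r < rows; ++r) {
        float acc = 0.0f;
        for (std::size_t c = 0; c < cols; ++c) {
            const std::int8_t w = W[r * cols + c];
            if (w == 1)        acc += x[c];  // +1: add the activation
            else if (w == -1)  acc -= x[c];  // -1: subtract the activation
                                             //  0: skip, no multiply anywhere
        }
        y[r] = acc * scale[r];               // restore magnitude with the row scale
    }
}
```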
Alternatives and similar repositories for bitnet_cpu:
Users interested in bitnet_cpu are comparing it to the repositories listed below.
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- RWKV-7: Surpassing GPT ☆83 · Updated 5 months ago
- An open source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated last week
- Fine-tunes a student LLM using teacher feedback for improved reasoning and answer quality. Implements GRPO with teacher-provided evaluati… ☆41 · Updated 2 months ago
- ☆49 · Updated last year
- Train your own small BitNet model ☆70 · Updated 6 months ago
- GGML implementation of the BERT model with Python bindings and quantization. ☆56 · Updated last year
- ☆19 · Updated this week
- QuIP quantization ☆52 · Updated last year
- Inference of RWKV v7 in pure C. ☆33 · Updated last month
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated this week
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- Inference of Mamba models in pure C ☆188 · Updated last year
- RWKV, in easy-to-read code ☆72 · Updated last month
- Python bindings for symphonia/opus - read various audio formats from Python and write Opus files ☆58 · Updated last week
- ☆46 · Updated 9 months ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated last year
- Inference of Llama/Llama2/Llama3 models in NumPy ☆20 · Updated last year
- ☆50 · Updated 6 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated 8 months ago
- Audio tokenization, in the fastest way possible! ☆51 · Updated 8 months ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆71 · Updated 3 months ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 8 months ago
- NanoGPT speedrunning for the poor T4 enjoyers ☆63 · Updated last week
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆54 · Updated last year
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length adapts the model's context limit ☆63 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆42 · Updated 11 months ago
- A simple, hackable text-to-speech system in PyTorch and MLX ☆154 · Updated 2 months ago
- An open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated last month