catid / bitnet_cpu
Experiments with BitNet inference on CPU
☆54 · Updated last year
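The idea these experiments build on: BitNet b1.58 quantizes weights to the ternary set {-1, 0, +1}, so a matrix-vector product on CPU needs no multiplications at all, only additions and subtractions. A minimal C++ sketch of that kernel follows; the function names, the unpacked `int8_t` weight layout, and the absence of bit-packing and SIMD are illustrative assumptions, not code taken from this repository.

```cpp
// Minimal sketch of a BitNet-style ternary matvec on CPU.
// Assumes weights are stored unpacked, one int8_t in {-1, 0, +1} per entry;
// real implementations pack weights and vectorize this loop.
#include <cstdint>
#include <cstdio>
#include <vector>

// One output element: accumulate +x, skip, or -x depending on the weight sign.
float ternary_dot(const int8_t* w, const float* x, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        if (w[i] > 0)      acc += x[i];
        else if (w[i] < 0) acc -= x[i];
        // w[i] == 0 contributes nothing.
    }
    return acc;
}

// y = W * x with W stored row-major as rows*cols ternary weights.
void ternary_matvec(const std::vector<int8_t>& W, const std::vector<float>& x,
                    std::vector<float>& y, size_t rows, size_t cols) {
    for (size_t r = 0; r < rows; ++r)
        y[r] = ternary_dot(&W[r * cols], x.data(), cols);
}

int main() {
    const size_t rows = 2, cols = 4;
    std::vector<int8_t> W = { 1,  0, -1, 1,    // row 0
                             -1,  1,  0, 0 };  // row 1
    std::vector<float> x = { 0.5f, -2.0f, 1.5f, 3.0f };
    std::vector<float> y(rows);
    ternary_matvec(W, x, y, rows, cols);
    std::printf("y = [%f, %f]\n", y[0], y[1]);  // expect [2.0, -2.5]
}
```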
Alternatives and similar repositories for bitnet_cpu
Users interested in bitnet_cpu are comparing it to the libraries listed below.
- GGML implementation of BERT model with Python bindings and quantization. ☆58 · Updated last year
- ☆50 · Updated last year
- RWKV in nanoGPT style ☆196 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 7 months ago
- Inference of Mamba models in pure C ☆194 · Updated last year
- RWKV-7: Surpassing GPT ☆100 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- Video+code lecture on building nanoGPT from scratch ☆68 · Updated last year
- Tune MPTs ☆84 · Updated 2 years ago
- ☆136 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- Python bindings for ggml ☆146 · Updated last year
- new optimizer ☆20 · Updated last year
- Implementation of the Mamba SSM with hf_integration. ☆56 · Updated last year
- QLoRA with Enhanced Multi GPU Support ☆37 · Updated 2 years ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Course Project for COMP4471 on RWKV ☆17 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 7 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆61 · Updated last year
- An open-source replication of the strawberry method that leverages Monte Carlo search with PPO and/or DPO ☆29 · Updated this week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated last month
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆100 · Updated 6 months ago
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Updated last year
- ☆39 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 5 months ago
- ☆63 · Updated last year
- Train your own small bitnet model ☆75 · Updated last year
- Latent Large Language Models ☆19 · Updated last year
- ☆52 · Updated last year
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago