wbrown / gpt_bpe
GPT2 Byte Pair Encoding implementation in Golang
☆24 · Updated last month
Alternatives and similar repositories for gpt_bpe
Users interested in gpt_bpe compare it to the repositories listed below.
- RWKV (Receptance Weighted Key Value) is an RNN with Transformer-level performance ☆41 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated 2 years ago
- Bindings to transformers in ggml ☆62 · Updated last month
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 4 years ago
- ☆26 · Updated 2 years ago
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ☆40 · Updated 2 years ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; Limited to 430M model at this… ☆21 · Updated 2 years ago
- ☆18 · Updated last year
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- Trying to deconstruct RWKV in understandable terms ☆14 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆64 · Updated 2 years ago
- ☆27 · Updated last year
- ☆39 · Updated 2 years ago
- tinygrad port of the RWKV large language model. ☆46 · Updated 3 months ago
- Training code for Sparse Autoencoders on embedding models ☆38 · Updated 3 months ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- A synthetic story narration dataset to study small audio LMs. ☆32 · Updated last year
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER).… ☆121 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models ☆69 · Updated last year
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Doohickey is a Stable Diffusion tool for technical artists who want to stay up to date with the latest developments in the field. ☆40 · Updated 2 years ago
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago
- Fast inference of instruction-tuned LLaMA on your personal devices. ☆22 · Updated 2 years ago
- ☆40 · Updated 2 years ago