mrsteyk / RWKV-LM-deepspeed
☆42 · Updated 2 years ago
Alternatives and similar repositories for RWKV-LM-deepspeed
Users interested in RWKV-LM-deepspeed are comparing it to the libraries listed below.
- This project aims to make RWKV accessible to everyone using a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆64 · Updated 2 years ago
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated last year
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- An experimental implementation of the retrieval-enhanced language model ☆74 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- SparseGPT + GPTQ compression of LLMs like LLaMA, OPT, and Pythia ☆41 · Updated 2 years ago
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆41 · Updated 7 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 11 months ago
- LLaMA tuning with the Stanford Alpaca dataset using DeepSpeed and Transformers ☆51 · Updated 2 years ago
- RWKV model implementation ☆38 · Updated last year
- An experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- Scripts and instructions for fine-tuning a large RWKV model on your data, using the Alpaca dataset ☆31 · Updated 2 years ago
- Run ONNX RWKV-v4 models with GPU acceleration using DirectML [Windows], or just on CPU [Windows AND Linux]; limited to the 430M model at this… ☆21 · Updated 2 years ago
- JAX implementations of RWKV ☆19 · Updated last year
- Transformers at any scale ☆41 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆72 · Updated 2 years ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆74 · Updated 2 years ago
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆33 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆37 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆60 · Updated 3 years ago
- Train LLaMA with LoRA on one 4090 and merge the LoRA weights to work as Stanford Alpaca ☆51 · Updated 2 years ago
- tinygrad port of the RWKV large language model ☆46 · Updated 3 months ago