BBuf / RWKV-World-HF-Tokenizer
☆33 · Updated 5 months ago
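The repository provides a HuggingFace-style tokenizer for RWKV "World" models. Below is a minimal sketch of how such a tokenizer is typically loaded through the standard `AutoTokenizer` entry point; the model id is a placeholder (not a verified checkpoint name), and `trust_remote_code=True` is assumed to be needed for the custom tokenizer class shipped with the checkpoint.

```python
# Minimal sketch: loading an RWKV "World" tokenizer via Hugging Face transformers.
# NOTE: the model id below is a placeholder; substitute the hub checkpoint or the
# local directory produced by this repository that you actually use.
from transformers import AutoTokenizer

MODEL_ID = "RWKV/rwkv-4-world-3b"  # placeholder id, not verified

# trust_remote_code=True lets transformers load the custom tokenizer class
# shipped alongside the checkpoint instead of a built-in tokenizer.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

ids = tokenizer("Hello, RWKV world!")["input_ids"]
print(ids)
print(tokenizer.decode(ids))
```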
Alternatives and similar repositories for RWKV-World-HF-Tokenizer:
Users interested in RWKV-World-HF-Tokenizer are comparing it to the libraries listed below.
- FuseAI Project ☆76 · Updated last month
- RWKV-7: Surpassing GPT ☆71 · Updated 2 months ago
- A fast RWKV Tokenizer written in Rust ☆37 · Updated 4 months ago
- ☆31 · Updated 7 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- A repository for research on medium sized language models. ☆76 · Updated 7 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 3 months ago
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated 7 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 5 months ago
- Evaluating LLMs with Dynamic Data ☆72 · Updated 2 months ago
- The official code repo and data hub of top_nsigma sampling strategy for LLMs. ☆20 · Updated 2 months ago
- Data preparation code for CrystalCoder 7B LLM ☆43 · Updated 8 months ago
- ☆69 · Updated this week
- QuIP quantization ☆48 · Updated 10 months ago
- RWKV centralised docs for the community ☆20 · Updated this week
- Official repository for ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" ☆17 · Updated last week
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆76 · Updated last year
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated last month
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆41 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 2 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆119 · Updated this week
- ☆16 · Updated 2 weeks ago
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆40 · Updated last year
- GoldFinch and other hybrid transformer components ☆42 · Updated 5 months ago
- A reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction" ☆22 · Updated 7 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆53 · Updated this week
- RWKV, in easy to read code ☆61 · Updated last month
- Here we will test various linear attention designs. ☆58 · Updated 8 months ago
- Linear Attention Sequence Parallelism (LASP) ☆74 · Updated 7 months ago
- ☆49 · Updated 10 months ago