RyokoAI / BigKnow2022
BigKnow2022: Bringing Language Models Up to Speed
☆15 · Updated 2 years ago
Alternatives and similar repositories for BigKnow2022:
Users interested in BigKnow2022 are comparing it to the libraries listed below.
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- ☆20 · Updated 11 months ago
- Using FlexAttention to compute attention with different masking patterns (see the first sketch after this list) ☆43 · Updated 7 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length adapts the model's context limit ☆63 · Updated last year
- ☆11 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆36 · Updated last year
- Implementation of the models from "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆30 · Updated 2 weeks ago
- A testbed for various linear attention designs (see the second sketch after this list) ☆60 · Updated last year
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 8 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- ☆17 · Updated last week
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 11 months ago
- A repository for research on medium-sized language models. ☆76 · Updated 11 months ago
- Large-scale RWKV v6 and v7 (World, ARWKV, PRWKV) inference. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy o… ☆35 · Updated last week
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 2 years ago
- RWKV model implementation ☆37 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 6 months ago
- ☆33 · Updated 10 months ago
- Repository for Skill Set Optimization ☆12 · Updated 9 months ago
- ☆32 · Updated last year
- Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" ☆18 · Updated this week
- Official repository for Efficient Linear-Time Attention Transformers. ☆18 · Updated 11 months ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (see the third sketch after this list) ☆49 · Updated 3 years ago
- ☆53 · Updated 10 months ago
- sigma-MoE layer ☆18 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Image diffusion block-merging technique applied to transformer-based language models. ☆54 · Updated 2 years ago
- ☆27 · Updated last year
- ☆34 · Updated last week
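
A few of the techniques named above are simple enough to illustrate in miniature. First, FlexAttention: the sketch below shows how different masking patterns are expressed as `mask_mod` functions with PyTorch's `torch.nn.attention.flex_attention` API (available since PyTorch 2.5). This is a generic illustration of the public API, not code from the linked repository; the window width of 128 is an arbitrary choice.

```python
# Minimal FlexAttention sketch with two masking patterns.
# Assumes PyTorch >= 2.5 and a CUDA device; not code from the linked repo.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 2, 4, 256, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda") for _ in range(3))

# Pattern 1: standard causal masking.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

# Pattern 2: causal sliding window of width 128 (arbitrary choice).
def sliding_window(b, h, q_idx, kv_idx):
    return (q_idx >= kv_idx) & (q_idx - kv_idx < 128)

for mask_mod in (causal, sliding_window):
    # B=None / H=None broadcast the mask over batch and heads.
    block_mask = create_block_mask(mask_mod, B=None, H=None, Q_LEN=S, KV_LEN=S)
    out = flex_attention(q, k, v, block_mask=block_mask)
    print(mask_mod.__name__, out.shape)  # torch.Size([2, 4, 256, 64])
```

Note that only the small mask function changes between the two patterns; the attention kernel itself stays the same, which is the point of the API.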
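Second, linear attention: the textbook causal formulation below replaces the S×S score matrix with running sums of φ(k)vᵀ, the common core that the many designs in a testbed like the one above vary. A minimal sketch assuming the elu+1 feature map of Katharopoulos et al.; it materializes the prefix sums for clarity rather than speed, and is not any specific design from the linked repository.

```python
# Generic causal linear attention: prefix sums of phi(k) v^T replace the
# quadratic score matrix. A textbook sketch, written for clarity over speed.
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, seq, dim)
    q, k = F.elu(q) + 1, F.elu(k) + 1  # positive feature map phi(x)
    # Running sums over the sequence: S_t = sum_{i<=t} phi(k_i) v_i^T
    kv = torch.einsum("bhsd,bhse->bhsde", k, v).cumsum(dim=2)
    z = k.cumsum(dim=2)                # normalizer: sum_{i<=t} phi(k_i)
    num = torch.einsum("bhsd,bhsde->bhse", q, kv)
    den = torch.einsum("bhsd,bhsd->bhs", q, z).unsqueeze(-1)
    return num / (den + eps)

q, k, v = (torch.randn(2, 4, 128, 64) for _ in range(3))
print(causal_linear_attention(q, k, v).shape)  # torch.Size([2, 4, 128, 64])
```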
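Third, token shift: the entire mixing operation is a one-step shift along the sequence axis, blended with the unshifted input. A sketch of the general idea as popularized by Token Shift GPT and RWKV; the linked repository's exact formulation may differ, and the 0.5 mix ratio is a placeholder.

```python
# Token shift: mix each position's features with its predecessor's, so
# "mixing" happens purely by shifting along the sequence axis.
import torch
import torch.nn.functional as F

def token_shift(x, mix=0.5):
    # x: (batch, seq, dim). Pad one step at the front of the sequence and
    # drop the last step, giving each position its predecessor's features.
    x_prev = F.pad(x, (0, 0, 1, -1))
    return mix * x + (1 - mix) * x_prev

x = torch.randn(2, 16, 32)
print(token_shift(x).shape)  # torch.Size([2, 16, 32])
```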