Aleph-Alpha / trigrams
☆54 · Updated 7 months ago
Alternatives and similar repositories for trigrams:
Users interested in trigrams are comparing it to the libraries listed below.
- ☆47 · Updated 7 months ago
- ☆48 · Updated 5 months ago
- ☆75 · Updated 7 months ago
- A repository for research on medium-sized language models. ☆76 · Updated 10 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated last week
- Set of scripts to finetune LLMs ☆37 · Updated last year
- Simple GRPO scripts and configurations. ☆58 · Updated 2 months ago
- ☆43 · Updated last year
- ☆33 · Updated 9 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers"☆36Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment☆55Updated 7 months ago
- ☆79Updated 11 months ago
- EvaByte: Efficient Byte-level Language Models at Scale☆86Updated 3 weeks ago
- ☆25Updated last year
- This repo is based on https://github.com/jiaweizzhao/GaLore☆26Updated 6 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P…☆34Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention"☆97Updated 6 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- RWKV-7: Surpassing GPT ☆82 · Updated 4 months ago
- ☆53 · Updated last year
- Train, tune, and run inference with the Bamba model ☆88 · Updated 2 months ago
- ☆40 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series. ☆179 · Updated 2 months ago
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆63 · Updated 11 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 6 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆72 · Updated 7 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated last month
- BPE modification that removes intermediate tokens during tokenizer training. ☆25 · Updated 4 months ago
- Prune transformer layers ☆68 · Updated 10 months ago