microsoft / encoder-decoder-slm
Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and vision-language capabilities
☆23 · Updated 3 months ago
Alternatives and similar repositories for encoder-decoder-slm
Users interested in encoder-decoder-slm are comparing it to the repositories listed below.
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- ☆33 · Updated 10 months ago
- Official implementation of "BERTs are Generative In-Context Learners" ☆27 · Updated 2 months ago
- XTR: Rethinking the Role of Token Retrieval in Multi-Vector Retrieval ☆51 · Updated 10 months ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆48 · Updated last week
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆17 · Updated last month
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆30 · Updated 2 months ago
- ☆48 · Updated 6 months ago
- ☆47 · Updated 8 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 3 weeks ago
- Experiments toward training a new and improved T5 ☆77 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆63 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 8 months ago
- ☆78 · Updated 8 months ago
- ☆25 · Updated last year
- ☆43 · Updated 3 months ago
- Code for the NeurIPS LLM Efficiency Challenge ☆57 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- ☆56 · Updated last week
- ☆47 · Updated 6 months ago
- ☆51 · Updated 6 months ago
- Code repository for the paper "MrT5: Dynamic Token Merging for Efficient Byte-level Language Models." ☆40 · Updated last month
- ☆81 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆103 · Updated last month
- Demonstration that finetuning a RoPE model on longer sequences than seen in pre-training extends the model's context limit ☆63 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 9 months ago
- BPE modification that removes intermediate tokens during tokenizer training. ☆25 · Updated 5 months ago
- Prune transformer layers ☆69 · Updated 11 months ago