luyug / magix
Supercharge huggingface transformers with model parallelism.
☆77 · Updated 5 months ago
Alternatives and similar repositories for magix
Users interested in magix are comparing it to the libraries listed below.
- ☆69 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆60 · Updated 6 months ago
- minimal pytorch implementation of bm25 (with sparse tensors) ☆104 · Updated 2 months ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆66 · Updated last year
- some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago
- ☆53 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 3 months ago
- PyTorch implementation for MRL ☆20 · Updated last year
- ☆17 · Updated 9 months ago
- ☆48 · Updated last year
- ☆59 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆78 · Updated last year
- This is a new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- ☆77 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated last year
- Repository for "I am a Strange Dataset: Metalinguistic Tests for Language Models" ☆45 · Updated 2 years ago
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated 2 years ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆67 · Updated 3 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆80 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 2 months ago
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated 10 months ago
- ☆86 · Updated last year
- ☆39 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆15 · Updated 2 years ago
- Aioli: A unified optimization framework for language model data mixing ☆32 · Updated 11 months ago