huggingface / distill-bloom-deepspeed
Teacher-student distillation using DeepSpeed
☆19 · Updated 3 years ago
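The repository covers knowledge distillation, where a small student model is trained to match a large teacher's output distribution, with DeepSpeed handling BLOOM-scale models. Below is a minimal sketch of the core logit-distillation step; the model names, temperature, and single-batch wiring are illustrative assumptions, not this repo's actual training loop, and in practice both models would be sharded via DeepSpeed rather than run as plain PyTorch modules.

```python
# Minimal sketch of teacher-student logit distillation (illustrative only).
# Small BLOOM checkpoints are used here for brevity; the real project targets
# full-size BLOOM sharded with DeepSpeed (e.g. via deepspeed.initialize).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b1").eval()
student = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

T = 2.0  # softmax temperature: softer targets expose more of the teacher's distribution

batch = tokenizer(["DeepSpeed makes large-model training practical."],
                  return_tensors="pt")

with torch.no_grad():                 # teacher is frozen
    t_logits = teacher(**batch).logits
s_logits = student(**batch).logits

# KL divergence between temperature-softened teacher and student distributions,
# averaged per token; the T**2 factor keeps gradients comparable across temperatures.
loss = F.kl_div(
    F.log_softmax(s_logits / T, dim=-1).flatten(0, 1),
    F.softmax(t_logits / T, dim=-1).flatten(0, 1),
    reduction="batchmean",
) * T * T

optimizer.zero_grad()
loss.backward()
optimizer.step()
```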
Alternatives and similar repositories for distill-bloom-deepspeed
Users interested in distill-bloom-deepspeed are comparing it to the libraries listed below.
- ☆127 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆43 · Updated last month
- Transformers at any scale · ☆41 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling · ☆39 · Updated last year
- SILO Language Models code repository · ☆83 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) · ☆78 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" · ☆39 · Updated 11 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2; a sketch of the technique appears after this list · ☆98 · Updated 2 years ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… · ☆56 · Updated 2 weeks ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… · ☆31 · Updated last year
- Some common Hugging Face transformers in maximal update parametrization (µP) · ☆86 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- An Experiment on Dynamic NTK Scaling RoPE · ☆64 · Updated last year
- Repo for the ICML 2023 paper "Why Do Nearest Neighbor Language Models Work?" · ☆59 · Updated 2 years ago
- Official implementation of "Extending LLMs’ Context Window with 100 Samples" · ☆80 · Updated last year
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR 2024) · ☆62 · Updated last year
- ☆48 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … · ☆60 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… · ☆31 · Updated 2 years ago
- ☆95 · Updated last year
- Techniques used to run BLOOM inference in parallel · ☆37 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, though it should work with any Hugging Face text dataset · ☆96 · Updated 2 years ago
- Code for "Democratizing Reasoning Ability: Tailored Learning from Large Language Model" (EMNLP 2023) · ☆36 · Updated last year
- A repository for research on medium-sized language models · ☆78 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) · ☆136 · Updated last year
- Repository for sparse finetuning of LLMs via a modified version of the MosaicML llmfoundry · ☆42 · Updated last year
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners · ☆116 · Updated 4 months ago
- [ICLR 2025 Oral] Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition · ☆15 · Updated 11 months ago
- Code for Zero-Shot Tokenizer Transfer · ☆140 · Updated 9 months ago
- Supercharge Hugging Face transformers with model parallelism · ☆77 · Updated 3 months ago
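For the speculative sampling repo above: the technique drafts several tokens with a cheap model and lets the expensive target model accept or reject them in one verification pass, while provably preserving the target model's output distribution. The toy sketch below shows the accept/reject logic; the stand-in draft_probs/target_probs callables and the tiny vocabulary are assumptions for illustration, not the repo's GPT-2 code.

```python
# Toy NumPy sketch of speculative sampling (accept/reject + residual resampling).
# The two "models" are stand-in callables returning next-token distributions;
# a real setup would use actual LM forward passes.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8  # illustrative vocabulary size

def draft_probs(prefix):
    """Cheap draft model: any callable mapping a prefix to a distribution."""
    logits = np.sin(np.arange(VOCAB) + len(prefix))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def target_probs(prefix):
    """Expensive target model (also a stand-in here)."""
    logits = np.cos(np.arange(VOCAB) * 0.7 + len(prefix))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def speculative_step(prefix, k=4):
    """Draft k tokens cheaply, then accept/reject them against the target model."""
    drafted, q_dists = [], []
    for _ in range(k):  # autoregressive drafting with the cheap model
        q = draft_probs(prefix + drafted)
        drafted.append(int(rng.choice(VOCAB, p=q)))
        q_dists.append(q)

    # One (conceptual) target-model pass scores every drafted position at once.
    p_dists = [target_probs(prefix + drafted[:i]) for i in range(k + 1)]

    out = []
    for i, x in enumerate(drafted):
        p, q = p_dists[i], q_dists[i]
        if rng.random() < min(1.0, p[x] / q[x]):   # accept drafted token
            out.append(x)
        else:                                      # reject: resample from max(0, p - q)
            residual = np.maximum(p - q, 0.0)
            out.append(int(rng.choice(VOCAB, p=residual / residual.sum())))
            return out
    # All k drafted tokens accepted: take one bonus token from the target model.
    out.append(int(rng.choice(VOCAB, p=p_dists[k])))
    return out

print(speculative_step([1, 2, 3]))
```

Each call emits between one and k+1 tokens for a single target-model pass, which is where the speedup comes from when the draft model's proposals are usually accepted.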