huggingface / transformers_bloom_parallel
Techniques for running BLOOM inference in parallel across multiple GPUs (a minimal sketch of the core idea follows below)
☆37 · Updated 3 years ago
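A central technique for serving a model the size of BLOOM on several GPUs is Megatron-style tensor parallelism: each rank holds a shard of every weight matrix, and partial results are combined with collective ops. The sketch below illustrates that idea for a single MLP block; it is not code from this repository, and it assumes `torch.distributed` with an NCCL backend, one GPU per rank, and a `torchrun` launch. All tensor names are illustrative.

```python
# Minimal sketch of Megatron-style tensor parallelism for one MLP block.
# NOT the repository's actual code: shapes and names are illustrative.
import torch
import torch.distributed as dist

def parallel_mlp(x, w_in_shard, w_out_shard):
    """Column-parallel first linear, row-parallel second linear.

    w_in_shard:  [hidden, ffn_dim // world_size]  (column shard)
    w_out_shard: [ffn_dim // world_size, hidden]  (row shard)
    """
    h = torch.nn.functional.gelu(x @ w_in_shard)  # each rank computes its slice
    y = h @ w_out_shard                           # partial sum of the output
    dist.all_reduce(y, op=dist.ReduceOp.SUM)      # combine partial results
    return y

if __name__ == "__main__":
    # Launch with: torchrun --nproc_per_node=2 this_file.py
    dist.init_process_group("nccl")
    rank, world = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(rank)

    hidden, ffn = 1024, 4096
    x = torch.randn(8, hidden, device="cuda")
    dist.broadcast(x, src=0)  # every rank sees the same activations
    w_in = torch.randn(hidden, ffn // world, device="cuda")
    w_out = torch.randn(ffn // world, hidden, device="cuda")
    y = parallel_mlp(x, w_in, w_out)
    if rank == 0:
        print(y.shape)  # torch.Size([8, 1024])
    dist.destroy_process_group()
```

Splitting the first projection by columns and the second by rows means only one all-reduce is needed per MLP block, which is why this layout is the standard choice for transformer inference.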
Alternatives and similar repositories for transformers_bloom_parallel
Users interested in transformers_bloom_parallel are comparing it to the libraries listed below.
- Pipeline for pulling and processing online language model pretraining data from the web · ☆177 · Updated 2 years ago
- Repository for analysis and experiments in the BigCode project · ☆128 · Updated last year
- Experiments with generating open-source language model assistants · ☆97 · Updated 2 years ago
- ☆66 · Updated 3 years ago
- Exploring finetuning of public checkpoints on filtered 8K-token sequences from the Pile · ☆116 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset · ☆96 · Updated 2 years ago
- An experimental implementation of the retrieval-enhanced language model · ☆75 · Updated 3 years ago
- A framework for few-shot evaluation of autoregressive language models · ☆105 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Scalable PaLM implementation in PyTorch · ☆189 · Updated 3 years ago
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models · ☆86 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- Minimal code to train a Large Language Model (LLM) · ☆171 · Updated 3 years ago
- Code repository for the c-BTM paper · ☆108 · Updated 2 years ago
- Open Instruction Generalist: an assistant trained on a massive set of synthetic instructions to perform millions of tasks · ☆210 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper · ☆110 · Updated 2 years ago
- ARCHIVED, see https://docs.adapterhub.ml/huggingface_hub.html · 🔌 A central repository collecting pre-trained adapter modules · ☆68 · Updated last year
- ☆65 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Anh: LAION's multilingual assistant datasets and models · ☆27 · Updated 2 years ago
- The pipeline for the OSCAR corpus · ☆175 · Updated 2 months ago
- This project studies the performance and robustness of language models and task-adaptation methods · ☆155 · Updated last year
- Tk-Instruct is a Transformer model tuned to solve many NLP tasks by following instructions · ☆183 · Updated 3 years ago
- ☆102 · Updated 3 years ago
- ☆184 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2 (a sketch of the algorithm follows this list) · ☆99 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following · ☆78 · Updated last year
- ☆93 · Updated 3 years ago
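The speculative sampling entry above refers to a well-documented algorithm: a cheap draft model proposes `k` tokens, the target model verifies them, and rejected tokens are resampled from the residual distribution. The sketch below is a minimal generic version of that loop, not code from the listed repository; `draft_probs` and `target_probs` are hypothetical callables mapping a token context to a next-token probability vector, and the toy demo distributions are invented for illustration.

```python
# Hedged sketch of speculative sampling with toy stand-in "models".
import numpy as np

rng = np.random.default_rng(0)

def sample(p):
    return int(rng.choice(len(p), p=p))

def speculative_step(context, target_probs, draft_probs, k=4):
    """Generate up to k+1 tokens: draft with the small model, verify with the large one."""
    drafted, q_list, ctx = [], [], list(context)
    for _ in range(k):                      # cheap autoregressive drafting
        q = draft_probs(ctx)
        t = sample(q)
        drafted.append(t)
        q_list.append(q)
        ctx.append(t)
    # One (conceptual) target-model pass scores every draft position at once.
    p_list = [target_probs(list(context) + drafted[:i]) for i in range(k + 1)]
    out = list(context)
    for i, t in enumerate(drafted):
        p, q = p_list[i], q_list[i]
        if rng.random() < min(1.0, p[t] / q[t]):   # accept the draft token
            out.append(t)
        else:                                      # reject: resample from residual
            residual = np.maximum(p - q, 0.0)
            out.append(sample(residual / residual.sum()))
            return out
    out.append(sample(p_list[k]))                  # all accepted: one bonus token
    return out

if __name__ == "__main__":
    V = 16
    # Toy stand-ins: fixed softmax distributions keyed on context length.
    def toy(seed):
        def f(ctx):
            r = np.random.default_rng(seed + len(ctx)).normal(size=V)
            e = np.exp(r - r.max())
            return e / e.sum()
        return f
    print(speculative_step([1, 2, 3], target_probs=toy(7), draft_probs=toy(8)))
```

The accept/resample rule makes the output distribution exactly match the target model's, so the draft model only affects speed, never sample quality.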