huggingface / transformers_bloom_parallel
Techniques used to run BLOOM at inference in parallel
☆37 · Updated 2 years ago
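As a rough illustration of what the repo covers (a minimal sketch, not the repo's actual scripts): full-size BLOOM is too large for a single GPU, so its weights are sharded across devices and inference runs over the shards. The snippet below uses Hugging Face's documented `device_map="auto"` placement as one simple way to do this; `bigscience/bloom-560m` is just a small stand-in checkpoint for illustration.

```python
# Minimal sketch: shard a BLOOM checkpoint across available devices
# (requires `pip install transformers accelerate torch`).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small stand-in; full BLOOM is 176B params
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",          # place layers across GPUs/CPU automatically
    torch_dtype=torch.float16,  # half precision halves memory per shard
)

prompt = "BLOOM is a multilingual language model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note this is naive layer placement rather than the tensor-parallel techniques the repo itself explores; it trades throughput for simplicity.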
Alternatives and similar repositories for transformers_bloom_parallel
Users interested in transformers_bloom_parallel are comparing it to the libraries listed below:
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆209 · Updated last year
- An experimental implementation of the retrieval-enhanced language model ☆76 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆94 · Updated 2 years ago
- ☆79 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models ☆105 · Updated 2 years ago
- ☆98 · Updated 2 years ago
- Repository for analysis and experiments in the BigCode project ☆124 · Updated last year
- Large-scale distributed model training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- Scalable PaLM implementation in PyTorch ☆190 · Updated 2 years ago
- ☆184 · Updated 2 years ago
- Anh: LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Tools for managing datasets for governance and training ☆83 · Updated last month
- ☆180 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- Tk-Instruct is a Transformer model tuned to solve many NLP tasks by following instructions ☆182 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models ☆84 · Updated last year
- Minimal code to train a Large Language Model (LLM) ☆172 · Updated 3 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆78 · Updated last year
- An instruction-based benchmark for text improvements ☆141 · Updated 2 years ago
- Code repository for the c-BTM paper ☆107 · Updated last year
- ☆101 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year
- ☆155 · Updated 4 years ago