huggingface / transformers_bloom_parallel
Techniques used to run BLOOM at inference in parallel
☆37 · Updated 2 years ago
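For context, running BLOOM at inference in parallel generally means sharding the model's weights across several GPUs. Below is a minimal sketch of the general idea using the `transformers` and `accelerate` libraries' `device_map` placement, not this repository's actual code; the checkpoint name and dtype are illustrative:

```python
# Minimal sketch: shard a BLOOM checkpoint across available GPUs for inference.
# Assumes torch, transformers, and accelerate are installed.
# "bigscience/bloom-7b1" is an illustrative checkpoint, not this repo's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-7b1",
    device_map="auto",          # accelerate spreads layers over GPUs/CPU
    torch_dtype=torch.bfloat16,  # halve memory vs. fp32
)

# Inputs go to the device holding the embedding layer.
inputs = tokenizer("BLOOM is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```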
Alternatives and similar repositories for transformers_bloom_parallel:
Users interested in transformers_bloom_parallel are comparing it to the libraries listed below.
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- Exploring fine-tuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago
- Repository for analysis and experiments in the BigCode project. ☆118 · Updated last year
- Transformers at any scale ☆41 · Updated last year
- A framework for few-shot evaluation of autoregressive language models. ☆103 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Code repository for the c-BTM paper ☆106 · Updated last year
- Tools for managing datasets for governance and training. ☆85 · Updated 3 months ago
- An experimental implementation of the retrieval-enhanced language model ☆74 · Updated 2 years ago
- Helper scripts and notes that were used while porting various NLP models ☆46 · Updated 3 years ago
- Experiments with inference on LLaMA ☆104 · Updated 11 months ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- Official repo for On the Generalization Ability of Retrieval-Enhanced Transformers ☆38 · Updated 11 months ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP