DOUDOU0314 / GPT-J-hf
GPT-J in JAX, based on the official Hugging Face library
★13 · Updated 4 years ago
Alternatives and similar repositories for GPT-J-hf
Users interested in GPT-J-hf are comparing it to the libraries listed below.
- Few-shot learning using EleutherAI's GPT-Neo, an open-source version of GPT-3 ★18 · Updated 4 years ago
- **ARCHIVED** Filesystem interface to the 🤗 Hub ★58 · Updated 2 years ago
- Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE ★19 · Updated 4 years ago
- ★33 · Updated 2 years ago
- URL downloader supporting checkpointing and continuous checksumming ★19 · Updated 2 years ago
- Calculating expected time for training LLMs ★38 · Updated 2 years ago
- TorchServe + Streamlit for easily serving your Hugging Face NER models ★33 · Updated 3 years ago
- Developing tools to automatically analyze datasets ★75 · Updated last year
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ★58 · Updated 3 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch ★76 · Updated 3 years ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ★17 · Updated 2 years ago
- Convenient Text-to-Text Training for Transformers ★19 · Updated 4 years ago
- ★13 · Updated last year
- A library for squeakily cleaning and filtering language datasets ★49 · Updated 2 years ago
- Megatron-LM 11B on Hugging Face Transformers ★27 · Updated 4 years ago
- My explorations into editing the knowledge and memories of an attention network ★35 · Updated 3 years ago
- Large-scale distributed model training strategy with Colossal-AI and Lightning AI ★56 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ★28 · Updated last year
- Anh - LAION's multilingual assistant datasets and models ★27 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset ★96 · Updated 2 years ago
- exBERT on Transformers 🤗 ★10 · Updated 4 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX ★81 · Updated 3 years ago
- Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism ★32 · Updated 3 years ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training ★18 · Updated last year
- Consists of the largest (10K) human-annotated code-switched semantic parsing dataset & 170K generated utterances using the CST5 augmentati… ★41 · Updated 2 years ago
- Embedding Recycling for Language Models ★38 · Updated 2 years ago
- Training a model without a dataset for natural language inference (NLI) ★25 · Updated 5 years ago
- ★37 · Updated 2 years ago
- Using short models to classify long texts ★21 · Updated 2 years ago
- ★20 · Updated 4 years ago