sradc / pretraining-BERT
Pre-train BERT from scratch, with HuggingFace. Accompanies the blog post: sidsite.com/posts/bert-from-scratch
☆43 · Updated 8 months ago
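For orientation, here is a minimal sketch of the workflow the repo's description promises: pretraining BERT from scratch on the masked-language-modeling objective with Hugging Face's Trainer. The corpus, sequence length, and hyperparameters below are illustrative assumptions, not the repo's actual settings; see the linked blog post for the real pipeline.

```python
# Minimal MLM-pretraining sketch with Hugging Face transformers/datasets.
# Corpus choice and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    BertConfig,
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # reuse the stock vocab
model = BertForMaskedLM(BertConfig())  # fresh random weights: pretraining from scratch

dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")  # example corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Randomly masks 15% of tokens in each batch: the standard MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-from-scratch", per_device_train_batch_size=32),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Note that DataCollatorForLanguageModeling applies the masking on the fly, so the same tokenized corpus can be reused across runs with different masking rates.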
Alternatives and similar repositories for pretraining-BERT
Users interested in pretraining-BERT are comparing it to the repositories listed below.
- ☆94 · Updated 2 years ago
- Functional local implementations of main model parallelism approaches ☆95 · Updated 2 years ago
- gzip Predicts Data-dependent Scaling Laws ☆34 · Updated last year
- This code repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆92 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆196 · Updated last year
- Supercharge huggingface transformers with model parallelism. ☆78 · Updated 6 months ago
- Documented and unit-tested educational deep learning framework with autograd from scratch. ☆122 · Updated last year
- An introduction to LLM sampling ☆79 · Updated last year
- 📝 Reference-free automatic summarization evaluation with potential hallucination detection ☆103 · Updated 2 years ago
- ReLM is a Regular Expression engine for Language Models ☆107 · Updated 2 years ago
- Genalog is an open-source, cross-platform Python package allowing generation of synthetic document images with custom degradations and te… ☆44 · Updated 2 years ago
- Gzip and nearest neighbors for text classification (a minimal sketch follows this list) ☆57 · Updated 2 years ago
- Highly commented implementations of Transformers in PyTorch ☆138 · Updated 2 years ago
- Experiments with inference on Llama ☆103 · Updated last year
- ☆144 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Various handy scripts to quickly set up new Linux and Windows sandboxes, containers and WSL. ☆40 · Updated this week
- Seamless interface for using PyTorch distributed with Jupyter notebooks ☆57 · Updated 4 months ago
- Evolution Pretraining Fully in Int Formats ☆136 · Updated 2 months ago
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆116 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆72 · Updated this week
- JAX-like function transformation engine, but micro: microjax ☆34 · Updated last year
- Automatic gradient descent ☆217 · Updated 2 years ago
- JAX notebook showing how to LoRA + GPTQ arbitrary models ☆10 · Updated 2 years ago
- ☆56 · Updated last year
- ☆68 · Updated last year
- PyTorch implementation for MRL ☆21 · Updated last year
- A comprehensive deep dive into the world of tokens ☆227 · Updated last year
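As promised above, here is a minimal sketch of the gzip-plus-nearest-neighbors technique behind the text-classification repo in this list: score pairs of texts by normalized compression distance (NCD) and classify by majority vote over the k nearest training examples. The toy training set and the choice of k are illustrative assumptions.

```python
# Gzip-based text classification via normalized compression distance (NCD)
# plus k-nearest neighbors. Toy data and k are illustrative assumptions.
import gzip
from collections import Counter

def clen(s: str) -> int:
    """Length of the gzip-compressed UTF-8 bytes of s."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """NCD(a, b) = (C(ab) - min(C(a), C(b))) / max(C(a), C(b))."""
    ca, cb = clen(a), clen(b)
    cab = clen(a + " " + b)  # compress the space-joined concatenation
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(text: str, train: list[tuple[str, str]], k: int = 3) -> str:
    """Label `text` by majority vote over its k nearest training examples."""
    neighbors = sorted(train, key=lambda ex: ncd(text, ex[0]))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [
    ("the team won the championship game", "sports"),
    ("stocks fell sharply on inflation fears", "finance"),
    ("the striker scored a late goal", "sports"),
    ("the central bank raised interest rates", "finance"),
]
print(classify("bond yields rose after the rate decision", train, k=3))
```

The appeal of the approach is that it needs no training loop or embeddings: the compressor itself acts as the similarity measure, since C(ab) is small relative to C(a) and C(b) when the two texts share structure.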