sradc / pretraining-BERT
Pre-train BERT from scratch, with HuggingFace. Accompanies the blog post: sidsite.com/posts/bert-from-scratch
☆41 · Updated last month
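The repo above pre-trains BERT from scratch with HuggingFace. As a minimal sketch of that idea (not the repo's actual code), the core is a randomly initialised `BertForMaskedLM` trained on a masked-language-modeling objective; the tiny config sizes and the `[MASK]` token id below are illustrative assumptions:

```python
# Minimal masked-language-modeling setup with HuggingFace Transformers.
# Config sizes are deliberately tiny for demonstration; they are NOT the
# values used in the pretraining-BERT repo.
import torch
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    vocab_size=30522,       # default BERT WordPiece vocab size
    hidden_size=128,        # tiny, for a fast demo
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=256,
)
model = BertForMaskedLM(config)  # random init: "from scratch", no checkpoint

# One toy batch: random token ids, with one position masked out.
input_ids = torch.randint(0, config.vocab_size, (1, 16))
labels = input_ids.clone()          # MLM loss is computed against the originals
input_ids[0, 5] = 103               # 103 is [MASK] in the standard BERT vocab

outputs = model(input_ids=input_ids, labels=labels)
print(outputs.logits.shape)         # per-token scores over the vocabulary
print(outputs.loss)                 # cross-entropy MLM loss
```

A real pretraining run would wrap this in `Trainer` with a `DataCollatorForLanguageModeling` to do the random masking over a large corpus, which is the pattern the accompanying blog post walks through.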
Alternatives and similar repositories for pretraining-BERT
Users interested in pretraining-BERT are comparing it to the repositories listed below.
- ☆92 · Updated last year
- gzip Predicts Data-dependent Scaling Laws ☆35 · Updated last year
- ☆61 · Updated last year
- Your favourite classical machine learning algos on the GPU/TPU ☆20 · Updated 5 months ago
- 📝 Reference-free automatic summarization evaluation with potential hallucination detection ☆100 · Updated last year
- Simple GRPO scripts and configurations ☆58 · Updated 4 months ago
- A case study of efficient training of large language models using commodity hardware ☆68 · Updated 2 years ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- ☆22 · Updated last year
- QLoRA for Masked Language Modeling ☆22 · Updated last year
- JAX-like function transformation engine, but micro: microjax ☆32 · Updated 8 months ago
- Highly commented implementations of Transformers in PyTorch ☆136 · Updated last year
- Functional local implementations of the main model parallelism approaches ☆95 · Updated 2 years ago
- An introduction to LLM sampling ☆78 · Updated 6 months ago
- ☆55 · Updated 7 months ago
- ☆53 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆30 · Updated 9 months ago
- ML/DL math and method notes ☆61 · Updated last year
- ☆21 · Updated 2 months ago
- PyTorch implementation of MRL ☆18 · Updated last year
- Some common HuggingFace transformers in maximal update parametrization (µP) ☆81 · Updated 3 years ago
- ☆27 · Updated 11 months ago
- Simplified implementation of a UMAP-like dimensionality reduction algorithm ☆49 · Updated 7 months ago
- ☆20 · Updated last year
- A place to store reusable transformer components, of my own creation or found on the interwebs ☆56 · Updated last week
- Implementation of the GateLoop Transformer in PyTorch and JAX ☆89 · Updated last year
- ☆81 · Updated last year
- ☆38 · Updated 11 months ago
- Transformer with µ-parameterization, implemented in JAX/Flax; supports FSDP on TPU pods ☆30 · Updated 3 weeks ago
- Latent Diffusion Language Models ☆68 · Updated last year