huggingface / m4-logs
M4 experiment logbook
☆58 · Updated last year
Alternatives and similar repositories for m4-logs
Users interested in m4-logs are comparing it to the libraries listed below.
- LL3M: Large Language and Multi-Modal Model in Jax ☆72 · Updated last year
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… ☆206 · Updated 11 months ago
- ☆65 · Updated last year
- Multimodal language model benchmark, featuring challenging examples ☆173 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets ☆158 · Updated last year
- Implementation of the DeepMind Flamingo vision-language model, based on Hugging Face language models and ready for training ☆167 · Updated 2 years ago
- Implementation of 🌻 Mirasol, a SOTA multimodal autoregressive model from Google DeepMind, in PyTorch ☆89 · Updated last year
- ☆85 · Updated 2 years ago
- Python library to evaluate VLM models' robustness across diverse benchmarks ☆210 · Updated this week
- ☆104 · Updated last year
- ☆50 · Updated last year
- See details in https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md ☆24 · Updated 2 years ago
- ☆75 · Updated last year
- An official codebase for the paper "CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos (ICCV 23)" ☆52 · Updated last year
- Big-Interleaved-Dataset ☆58 · Updated 2 years ago
- Code repository for the c-BTM paper ☆107 · Updated last year
- Language Quantized AutoEncoders ☆108 · Updated 2 years ago
- Public Inflection Benchmarks ☆68 · Updated last year
- SILO Language Models code repository ☆81 · Updated last year
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s… ☆139 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆162 · Updated 2 months ago
- ☆84 · Updated last year
- Official repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- ☆149 · Updated last year
- Easily run PyTorch on multiple GPUs & machines ☆46 · Updated last month
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆130 · Updated last year
- ☆63 · Updated 10 months ago
- Scaling Data-Constrained Language Models ☆338 · Updated last month
- Code for Zero-Shot Tokenizer Transfer ☆135 · Updated 6 months ago