IBM / dolomite-engine
Dolomite Engine is a library for pretraining/finetuning LLMs.
☆27 · Updated this week

Alternatives and similar repositories for dolomite-engine:
Users interested in dolomite-engine are comparing it to the libraries listed below.
- Codebase release for an EMNLP 2023 paper ☆19 · Updated 10 months ago
- Large language models (LLMs) made easy, EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆64 · Updated 5 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆78 · Updated 2 years ago
- Train, tune, and run inference with the Bamba model ☆76 · Updated this week
- PyTorch building blocks for OLMo ☆47 · Updated this week
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆66 · Updated 9 months ago
- A toolkit for scaling law research ⚖ ☆43 · Updated last month
- Minimum Description Length probing for neural network representations ☆18 · Updated last week
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆40 · Updated last month
- Experiments toward training a new and improved T5 ☆77 · Updated 9 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated last year
- A repository for research on medium-sized language models ☆76 · Updated 7 months ago
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆29 · Updated 3 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆45 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆33 · Updated last week
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 7 months ago
- Code for Adaptive Data Optimization ☆21 · Updated last month