Zyphra / zcookbook
Training hybrid models for dummies.
☆20 · Updated last month
Alternatives and similar repositories for zcookbook:
Users interested in zcookbook are comparing it to the repositories listed below.
- Implementation of https://arxiv.org/pdf/2312.09299 ☆20 · Updated 8 months ago
- Latent Large Language Models ☆17 · Updated 6 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆25 · Updated 3 months ago
- A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem ☆16 · Updated 4 months ago
- A new way to generate large quantities of high-quality synthetic data (on par with GPT-4), with better controllability, at a fraction of … ☆22 · Updated 5 months ago
- Implementation of Spectral State Space Models ☆16 · Updated last year
- Minimum Description Length probing for neural network representations ☆19 · Updated last month
- Exploration using DSPy to optimize modules to maximize performance on the OpenToM dataset ☆14 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 11 months ago
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace ☆25 · Updated 8 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆17 · Updated last month
- Rust bindings for CTranslate2 ☆14 · Updated last year
- The official evaluation suite and dynamic data release for MixEval ☆10 · Updated 5 months ago
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated 4 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆28 · Updated last week
- ☆19 · Updated this week
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated 11 months ago
- ☆15 · Updated 5 months ago
- Aioli: A unified optimization framework for language model data mixing ☆22 · Updated last month
- Demonstration that fine-tuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆33 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆29 · Updated 5 months ago
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions ☆20 · Updated 9 months ago
- An alternative way to calculate self-attention ☆18 · Updated 9 months ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated 10 months ago
- ☆20 · Updated 4 months ago
- Nexusflow function call, tool use, and agent benchmarks ☆19 · Updated 2 months ago
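One entry above implements SelfExtend ("LLM Maybe LongLM"), which extends a model's context window without tuning. A minimal sketch of the position-mapping idea as I understand it (the function name and parameters are my own illustration, not code from the listed repo): keys within a neighbor window keep their exact relative positions, while more distant keys reuse grouped (floor-divided) positions, shifted so the two regions join up continuously.

```python
def self_extend_rel_pos(q_pos: int, k_pos: int, group_size: int, window: int) -> int:
    """Relative position a query assigns a key under a SelfExtend-style scheme.

    Nearby tokens (within `window`) use normal relative positions; distant
    tokens use grouped positions (floor division by `group_size`), shifted by
    window - window // group_size so the mapping is continuous at the boundary.
    """
    rel = q_pos - k_pos
    if rel < window:
        return rel  # neighbor tokens: exact positions, as in standard attention
    shift = window - window // group_size
    return q_pos // group_size - k_pos // group_size + shift
```

For example, with `window=8` and `group_size=4`, a key 50 tokens behind the query at position 100 maps to relative position 19 instead of 50, so position ids stay close to the range the model saw during pretraining even on much longer inputs.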