Zyphra / zcookbook
Training hybrid models for dummies.
☆15 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for zcookbook
- ☆41 · Updated 2 weeks ago
- Implementation of Spectral State Space Models ☆17 · Updated 9 months ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆22 · Updated this week
- Latent Large Language Models ☆16 · Updated 3 months ago
- GoldFinch and other hybrid transformer components ☆40 · Updated 4 months ago
- ☆36 · Updated 3 months ago
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆21 · Updated 5 months ago
- An alternative way of calculating self-attention ☆18 · Updated 5 months ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated 6 months ago
- Implementation of https://arxiv.org/pdf/2312.09299 ☆19 · Updated 4 months ago
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆28 · Updated 2 months ago
- ☆15 · Updated this week
- ☆27 · Updated 5 months ago
- A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem ☆15 · Updated 3 weeks ago
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. ☆20 · Updated 5 months ago
- Understanding how features learned by neural networks evolve throughout training ☆31 · Updated last month
- LLM training in simple, raw C/CUDA ☆12 · Updated last month
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta (see the sketch after this list) ☆13 · Updated last week
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 7 months ago
- Exploration using DSPy to optimize modules to maximize performance on the OpenToM dataset ☆13 · Updated 8 months ago
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆20 · Updated 9 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆23 · Updated last week
- ☆22 · Updated 6 months ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆30 · Updated last month
- QLoRA for Masked Language Modeling ☆20 · Updated last year
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆30 · Updated 3 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit ☆63 · Updated last year
- Lightweight tools for quick and easy LLM demos ☆26 · Updated 2 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated 5 months ago
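For context on the SelfExtend entry above: the paper's core trick is to keep exact relative positions for tokens within a neighbor window and map more distant tokens to coarser "grouped" positions via floor division, so a pretrained model's existing position range covers a longer context without any fine-tuning. Below is a minimal sketch of that remapping only, not code from the listed repo; the function name, `group_size`, and `window` defaults are illustrative assumptions.

```python
import torch

def self_extend_relative_positions(q_len: int, k_len: int,
                                    group_size: int = 8,
                                    window: int = 512) -> torch.Tensor:
    """Sketch of SelfExtend-style position remapping (names illustrative).

    Tokens within `window` of the query keep their exact relative
    position; more distant tokens get floor-divided "grouped" positions,
    shifted so the two regions meet continuously at the window boundary.
    """
    q_pos = torch.arange(q_len).unsqueeze(1)   # (q_len, 1)
    k_pos = torch.arange(k_len).unsqueeze(0)   # (1, k_len)
    exact = q_pos - k_pos                      # standard relative offsets
    grouped = (q_pos // group_size - k_pos // group_size
               + window - window // group_size)
    # Near tokens use exact offsets; far tokens use grouped offsets.
    return torch.where(exact <= window, exact, grouped)
```

These remapped offsets would then be fed to the model's relative position encoding (e.g., RoPE) in place of the raw offsets, which is what lets the attention window extend past the pre-trained limit without tuning.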