Scikud / AnythingButWrappers
☆13 · Updated 2 years ago
Alternatives and similar repositories for AnythingButWrappers
Users interested in AnythingButWrappers are comparing it to the libraries listed below.
- ☆22 · Updated last year
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆30 · Updated 8 months ago
- Training and Inference Notebooks for the RedPajama (OpenLlama) models ☆17 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. ☆46 · Updated last year
- ☆41 · Updated 11 months ago
- Latent Large Language Models ☆18 · Updated 9 months ago
- PyTorch implementation for MRL ☆18 · Updated last year
- ☆29 · Updated 6 months ago
- QLoRA for Masked Language Modeling ☆22 · Updated last year
- Check for data drift between two OpenAI multi-turn chat jsonl files. ☆37 · Updated last year
- Telemetry for applications that use LLM tools. ☆25 · Updated 2 years ago
- A clean implementation of Low-Rank Adaptation of Large Language Models ☆8 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Explore the use of DSPy for extracting features from PDFs 🔎 ☆40 · Updated last year
- A clone of OpenAI's Tokenizer page for HuggingFace Models ☆45 · Updated last year
- ☆23 · Updated last year
- Run LLMs on Replicate with vLLM ☆17 · Updated 7 months ago
- ☆19 · Updated last week
- Track the progress of LLM context utilisation ☆53 · Updated last month
- [Added T5 support to TRLX] A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆47 · Updated 2 years ago
- Comparing the retrieval abilities of GPT-4-Turbo and a RAG system on a toy example for various context lengths ☆35 · Updated last year
- This repository contains code for cleaning your training data of benchmark data to help combat data snooping. ☆25 · Updated 2 years ago
- A reading list of relevant papers and projects on foundation model annotation ☆27 · Updated 3 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆74 · Updated last week
- Apps that run on modal.com ☆12 · Updated last year
- ☆20 · Updated 2 months ago
- ☆48 · Updated last year
- Example code using the DSPy framework. ☆18 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year