jina-ai/textbook
Distill ChatGPT's coding ability into a small (1B-parameter) model
☆26 · Updated last year
Alternatives and similar repositories for textbook:
Users interested in textbook are comparing it to the repositories listed below.
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆63 · Updated last year
- Official implementation for "Law of the Weakest Link: Cross capabilities of Large Language Models" ☆41 · Updated 3 months ago
- ☆74 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 2 months ago
- Aioli: A unified optimization framework for language model data mixing ☆19 · Updated last week
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated 9 months ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆92 · Updated last year
- Transformers at any scale ☆41 · Updated last year
- ☆31 · Updated 7 months ago
- This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆27 · Updated 5 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆40 · Updated 6 months ago
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆31 · Updated 11 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆74 · Updated 4 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆80 · Updated 11 months ago
- Code for EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆50 · Updated 3 months ago
- Demonstration that finetuning a RoPE model on longer sequences than the pre-trained model adapts the model's context limit ☆63 · Updated last year
- ☆48 · Updated last year
- ReBase: Training Task Experts through Retrieval Based Distillation ☆28 · Updated 6 months ago
- ☆53 · Updated 3 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆69 · Updated last month
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆28 · Updated this week
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆76 · Updated last year
- Advanced Reasoning Benchmark Dataset for LLMs ☆45 · Updated last year
- This is the official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" ☆91 · Updated last month
- Code for our paper Resources and Evaluations for Multi-Distribution Dense Information Retrieval ☆14 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆66 · Updated 7 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆77 · Updated 5 months ago
- ☆64 · Updated 11 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year