furlat / OpenBugger
Code to create bugged Python scripts for OpenAssistant training, maintained by https://twitter.com/Cyndesama
☆21 · Updated last year
Alternatives and similar repositories for OpenBugger:
Users interested in OpenBugger are comparing it to the libraries listed below.
- ☆31 · Updated 8 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated last year
- ☆38 · Updated last year
- ☆22 · Updated last year
- ☆49 · Updated 11 months ago
- ☆48 · Updated last year
- ☆48 · Updated 3 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated 11 months ago
- ☆20 · Updated last year
- ☆80 · Updated last month
- ☆34 · Updated last year
- This repository contains code for cleaning your training data of benchmark data to help combat data snooping. ☆25 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 5 months ago
- ☆24 · Updated last year
- Track the progress of LLM context utilisation ☆53 · Updated 7 months ago
- Certified Reasoning with Language Models ☆31 · Updated last year
- Using open source LLMs to build synthetic datasets for direct preference optimization ☆57 · Updated 11 months ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆45 · Updated last year
- This repository contains all the code for collecting large-scale amounts of code from GitHub. ☆107 · Updated 2 years ago
- A Collection of Pydantic Models to Abstract IRL ☆17 · Updated 2 months ago
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated 9 months ago
- Experiments with generating opensource language model assistants ☆97 · Updated last year
- ☆74 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- LLM sampling method for enforcing syntax adherence in generated output ☆23 · Updated last year
- ☆22 · Updated 8 months ago
- Exploration using DSPy to optimize modules to maximize performance on the OpenToM dataset ☆14 · Updated 11 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite ☆33 · Updated 11 months ago
- Chat Markup Language conversation library ☆55 · Updated last year