LLM360 / Analysis360
Open Implementations of LLM Analyses
☆103 · Updated 7 months ago
Alternatives and similar repositories for Analysis360
Users interested in Analysis360 are comparing it to the libraries listed below.
- Pre-training code for CrystalCoder 7B LLM · ☆54 · Updated last year
- Data preparation code for Amber 7B LLM · ☆90 · Updated last year
- Data preparation code for CrystalCoder 7B LLM · ☆44 · Updated last year
- Evaluating LLMs with fewer examples · ☆155 · Updated last year
- Small and Efficient Mathematical Reasoning LLMs · ☆71 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting" · ☆105 · Updated 8 months ago
- ☆97 · Updated 11 months ago
- Codebase accompanying the Summary of a Haystack paper · ☆78 · Updated 8 months ago
- Reward Model framework for LLM RLHF · ☆61 · Updated last year
- ☆38 · Updated 10 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) · ☆135 · Updated 6 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" · ☆110 · Updated 8 months ago
- Code and data for CoachLM, an automatic instruction-revision approach for LLM instruction tuning · ☆61 · Updated last year
- 🚢 Data Toolkit for Sailor Language Models · ☆91 · Updated 3 months ago
- ☆120 · Updated 8 months ago
- The official repository for Inheritune · ☆111 · Updated 3 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models · ☆96 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆107Updated 2 weeks ago
- a curated list of the role of small models in the LLM era☆100Updated 8 months ago
- Pre-training code for Amber 7B LLM · ☆166 · Updated last year
- Evaluating LLMs with CommonGen-Lite · ☆90 · Updated last year
- ☆61 · Updated last year
- Spherically merge PyTorch/HF-format language models with minimal feature loss · ☆123 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods · ☆150 · Updated last year
- DSBench: How Far are Data Science Agents from Becoming Data Science Experts? · ☆54 · Updated 3 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" · ☆108 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" · ☆116 · Updated 11 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users · ☆223 · Updated 6 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators · ☆42 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆54 · Updated last year