EQ-bench / EQ-Bench
A benchmark for emotional intelligence in large language models
☆302 · Updated 10 months ago
Alternatives and similar repositories for EQ-Bench
Users interested in EQ-Bench are comparing it to the repositories listed below.
- ☆157 · Updated 10 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆239 · Updated last year
- Low-rank adapter extraction for fine-tuned transformer models ☆171 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… ☆239 · Updated 3 months ago
- Official repo for "Make Your LLM Fully Utilize the Context" ☆252 · Updated last year
- autologic is a Python package that implements the SELF-DISCOVER framework proposed in the paper SELF-DISCOVER: Large Language Models Self… ☆57 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆221 · Updated 7 months ago
- A fast batching API to serve LLMs ☆181 · Updated last year
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆318 · Updated last week
- A library for easily merging multiple LLM experts and efficiently training the merged LLM ☆479 · Updated 9 months ago
- ☆309 · Updated 11 months ago
- Automatic evals for LLMs ☆407 · Updated this week
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated 10 months ago
- ☆121 · Updated last month
- Merge Transformers language models using gradient parameters. ☆207 · Updated 9 months ago
- A bagel, with everything. ☆320 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆151 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆167 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 4 months ago
- ☆291 · Updated 2 months ago
- A simple unified framework for evaluating LLMs ☆215 · Updated last month
- ☆157 · Updated 9 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆153 · Updated last year
- ☆114 · Updated 5 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆104 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- GRadient-INformed MoE ☆263 · Updated 8 months ago
- Code for Husky, an open-source language agent that solves complex, multi-step reasoning tasks. Husky v1 addresses numerical, tabular and … ☆345 · Updated 11 months ago
- ☆120 · Updated 9 months ago