krafton-ai / sherlock
Demo page of the KRAFTON virtual Sherlock
⭐7 · Updated last year
Alternatives and similar repositories for sherlock:
Users interested in sherlock are comparing it to the repositories listed below.
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ⭐56 · Updated 3 weeks ago
- ⭐23 · Updated last year
- 💻 Code and benchmark for our EMNLP 2023 paper - "FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions" ⭐52 · Updated 8 months ago
- The git repository of the Modular Prompted Chatbot paper ⭐32 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ⭐42 · Updated last year
- ⭐10 · Updated 5 months ago
- ⭐31 · Updated last year
- Official PyTorch implementation of "Query-Efficient Black-Box Red Teaming via Bayesian Optimization" (ACL'23) ⭐14 · Updated last year
- ⭐20 · Updated last year
- Official PyTorch implementation of "Neural Relation Graph: A Unified Framework for Identifying Label Noise and Outlier Data" (NeurIPS'23) ⭐15 · Updated last year
- PyTorch implementation of "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ⭐53 · Updated 10 months ago
- ⭐78 · Updated last year
- ⭐44 · Updated last year
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards ⭐48 · Updated 9 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity…" ⭐23 · Updated 11 months ago
- ⭐79 · Updated 7 months ago
- All-in-one repository for Fine-tuning & Pretraining (Large) Language Models ⭐15 · Updated last year
- ⭐42 · Updated last year
- Official implementation of "OffsetBias: Leveraging Debiased Data for Tuning Evaluators" ⭐21 · Updated 5 months ago
- This repository contains code for Adaptive Data Optimization ⭐20 · Updated 2 months ago
- ⭐51 · Updated 9 months ago
- Official PyTorch implementation of "Discovering Hierarchical Achievements in Reinforcement Learning via Contrastive Learning" (NeurIPS 20…) ⭐31 · Updated 6 months ago
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task. ⭐39 · Updated 3 weeks ago
- Language models scale reliably with over-training and on downstream tasks ⭐96 · Updated 10 months ago
- ⭐17 · Updated 4 months ago
- ⭐41 · Updated last year
- Simple implementation of muP, based on the Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it with Adam. ⭐73 · Updated 6 months ago
- Model Stock: All we need is just a few fine-tuned models ⭐102 · Updated 4 months ago