cvs-health / langfair
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
☆232 · Updated last week
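For context on what a LangFair assessment looks like in practice, here is a minimal sketch of scoring pre-generated model responses for toxicity. It assumes the `ToxicityMetrics` class and its `evaluate(prompts=..., responses=...)` method behave as outlined in the project README; the exact names and signature should be verified against the current LangFair documentation.

```python
# Minimal sketch (assumes `pip install langfair`). Class and method names follow
# the project README and may differ across releases -- verify against the docs.
from langfair.metrics.toxicity import ToxicityMetrics

# Hypothetical prompts and the responses an LLM produced for them.
prompts = [
    "Describe a typical software engineer.",
    "Describe a typical nurse.",
]
responses = [
    "A software engineer writes and maintains code, often working in teams.",
    "A nurse provides patient care, administers medication, and supports doctors.",
]

tm = ToxicityMetrics()
result = tm.evaluate(prompts=prompts, responses=responses)

# Expected to return aggregate toxicity metrics (e.g., toxic fraction,
# expected maximum toxicity) computed over the supplied responses.
print(result)
```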
Alternatives and similar repositories for langfair
Users interested in langfair are comparing it to the libraries listed below
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆140 · Updated 3 weeks ago
- This playlab encompasses a multitude of projects crafted through the utilization of Large Language Models, showcasing the versatility and… ☆128 · Updated last month
- Sample notebooks and prompts for LLM evaluation ☆138 · Updated 3 months ago
- A practical RAG where you can download and chat with a GitHub repo ☆87 · Updated 7 months ago
- A curated list of awesome synthetic data tools (open source and commercial). ☆206 · Updated last year
- An index of all of our weekly concepts + code events for aspiring AI Engineers and Business Leaders!! ☆86 · Updated last week
- This repository will contain the presentation and Python Jupyter notebooks for the DataHack Summit 2024 conference talk, Improving Real-w… ☆121 · Updated 11 months ago
- Material for the series of seminars on Large Language Models ☆34 · Updated last year
- Low latency, High Accuracy, Custom Query routers for Humans and Agents. Built by Prithivi Da ☆116 · Updated 5 months ago
- A framework for fine-tuning retrieval-augmented generation (RAG) systems. ☆129 · Updated this week
- EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of the output of other la… ☆77 · Updated this week
- This repository will contain the presentation and Python Jupyter notebooks for my DataHack Summit 2025 conference talk, Building Effectiv… ☆68 · Updated 3 weeks ago
- GenAI Experimentation ☆57 · Updated last month
- Fine-tune an LLM to perform batch inference and online serving. ☆112 · Updated 3 months ago
- A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podc… ☆84 · Updated last week
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆105 · Updated last year
- Fiddler Auditor is a tool to evaluate language models. ☆187 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated last year
- ☆73 · Updated 11 months ago
- Optimized Large Language Models for Financial Applications – Efficient, Scalable, and Domain-Specific AI for Finance. ☆51 · Updated 2 months ago
- Following emerging Large Language Model Operations (LLM Ops) best practices in the industry, you’ll learn all about the key technologies … ☆280 · Updated last year
- ☆145 · Updated last year
- A curated list of materials on AI guardrails ☆40 · Updated 3 months ago
- This repo is the central repo for all the RAG Evaluation reference material and partner workshops ☆76 · Updated 4 months ago
- 💻 Decoding ML articles hub: Hands-on articles with code on production-grade ML ☆139 · Updated 6 months ago
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation fr… ☆18 · Updated last year
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate. ☆114 · Updated last month
- This repository will contain all the presentations, content, and hands-on notebooks for a full-day Generative AI workshop on Training, Fine-t… ☆307 · Updated 6 months ago
- Set up your local AI-powered dev environment just like professional AI Engineers ☆67 · Updated 2 months ago
- Code files for advanced LLM Course ☆214 · Updated last year