cisco-open / ResponsibleAI
RAI is a Python library that helps AI developers with various aspects of responsible AI development.
☆58 · Updated 11 months ago
Alternatives and similar repositories for ResponsibleAI
Users interested in ResponsibleAI are comparing it to the libraries listed below.
- ☆12 · Updated last year
- A curated list of awesome academic research, books, codes of ethics, data sets, institutes, maturity models, newsletters, principles, podc… ☆74 · Updated last week
- ChatBot app built using LangChain and Lightning AI ☆18 · Updated 2 years ago
- Chat with various documents - XLSX, DOCX, PPTX, CSV, PDF, TXT - using ChatGPT 4 Turbo and LangChain ☆48 · Updated 2 months ago
- Course on LLMs: Building Personalized Customer Chatbots ☆29 · Updated last year
- Prompt engineering for large language models - notebooks, demos, exercises, and projects ☆23 · Updated last year
- Open-source datasets for anyone interested in working with network-anomaly-based machine learning, data science, and research ☆120 · Updated 2 years ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ☆66 · Updated 2 years ago
- Framework for building and maintaining self-updating prompts for LLMs ☆63 · Updated 11 months ago
- Code and documentation for the ML Anywhere project. ☆17 · Updated 3 years ago
- you.com's framework for evaluating deep research systems. ☆13 · Updated 3 weeks ago
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed ☆35 · Updated last year
- A JupyterLab extension for tracking, managing, and comparing Responsible AI mitigations and experiments. ☆45 · Updated 2 years ago
- ☆16 · Updated 11 months ago
- meta_llama_2finetuned_text_generation_summarization ☆21 · Updated last year
- A personal knowledge base that I can dump information into and that helps me learn ☆24 · Updated last week
- Client interface to Cleanlab Studio and the Trustworthy Language Model ☆32 · Updated 3 months ago
- Repository containing awesome resources for Hugging Face tooling. ☆47 · Updated last year
- Sample notebooks and prompts for LLM evaluation ☆131 · Updated this week
- Fiddler Auditor is a tool for evaluating language models. ☆181 · Updated last year
- Research notes and extra resources for all the work at explodinggradients.com ☆23 · Updated 2 months ago
- Awesome Orchest projects, both official and community-submitted. ☆25 · Updated last year
- A repository that showcases how you can use ZenML with Git ☆69 · Updated 3 weeks ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆49 · Updated 10 months ago
- MirrorDataGenerator is a Python tool that generates synthetic data based on user-specified causal relations among features in the data. I… ☆23 · Updated 2 years ago
- The Foundation Model Transparency Index ☆79 · Updated last year
- A collection of fine-tuning notebooks! ☆27 · Updated last year
- Example code and notebooks related to MLflow, LLMOps, etc. ☆43 · Updated 11 months ago
- Test LLMs automatically with Giskard and CI/CD ☆30 · Updated 10 months ago
- Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central … ☆47 · Updated 11 months ago