tahamajs / Deep_Generative_models_course
This repository collects lecture slides, assignments (CAs), code notebooks, reports, and reference papers used in the "Deep Generative Models" course at the University of Tehran. The materials are organized to be reproducible and educational: each assignment contains an annotated Jupyter notebook, supporting code, and a report.
☆16 · Updated last month
Alternatives and similar repositories for Deep_Generative_models_course
Users interested in Deep_Generative_models_course are comparing it to the repositories listed below.
- A package dedicated to running benchmark agreement testing ☆17 · Updated 4 months ago
- An Apache 2.0 fork of HuggingFace's Large Language Model Text Generation Inference ☆19 · Updated last year
- Open Source Replication of Anthropic's Alignment Faking Paper ☆54 · Updated 10 months ago
- This project aims to convert the content of GitHub repositories into a structured, machine-readable format, enabling AI models like ChatG… ☆12 · Updated last year
- Fluid Language Model Benchmarking ☆26 · Updated 4 months ago
- [ICML'24] TroVE: Inducing Verifiable and Efficient Toolboxes for Solving Programmatic Tasks ☆31 · Updated last year
- Can Language Models Solve Olympiad Programming? ☆123 · Updated last year
- A framework bridging cognitive science and LLM reasoning research to diagnose and improve how large language models reason, based on anal… ☆33 · Updated 2 months ago
- ☆53 · Updated last year
- ☆21 · Updated 3 years ago
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner ☆30 · Updated last year
- ☆11 · Updated 8 months ago
- ☆26 · Updated 2 months ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆47 · Updated 2 years ago
- Code for EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆54 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆224 · Updated last month
- A benchmark list for evaluation of large language models. ☆159 · Updated 3 weeks ago
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆124 · Updated last year
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆47 · Updated last year
- Persona Vectors: Monitoring and Controlling Character Traits in Language Models ☆348 · Updated 6 months ago
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆205 · Updated last year
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆63 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 11 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- A simple evaluation of generative language models and safety classifiers. ☆85 · Updated last month
- Resources for cultural NLP research ☆113 · Updated 4 months ago
- AIRA-dojo: a framework for developing and evaluating AI research agents ☆125 · Updated 2 weeks ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆246 · Updated last year