varunshenoy / coauthor
Convert natural language to LaTeX within Overleaf using LLMs
☆119 · Updated 2 years ago
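The repository description above only states the goal (natural language in, LaTeX out, driven by an LLM). As a rough illustration of that idea — not the repo's actual code — here is a minimal Python sketch assuming the OpenAI Python client; the model name, prompt, and function name are illustrative choices, not anything taken from coauthor.

```python
# Minimal sketch of the general idea behind a tool like coauthor:
# send a natural-language description to an LLM and ask for LaTeX back.
# Assumes the OpenAI Python client (`pip install openai`) and an
# OPENAI_API_KEY in the environment; the real extension's prompts,
# model, and integration with Overleaf will differ.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def natural_language_to_latex(description: str) -> str:
    """Return a LaTeX snippet for a plain-English description."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Convert the user's description into valid LaTeX. "
                    "Reply with LaTeX only, no commentary."
                ),
            },
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(natural_language_to_latex("a 2x2 matrix with entries a, b, c, d"))
```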
Alternatives and similar repositories for coauthor
Users interested in coauthor are comparing it to the libraries listed below.
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆70 · Updated 2 years ago
- ☆94 · Updated 5 months ago
- ☆75 · Updated 2 months ago
- Probabilistic LLM evaluations. [CogSci 2023; ACL 2023] ☆73 · Updated 10 months ago
- ☆61 · Updated last year
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆40 · Updated last year
- compute, storage, and networking infra at home ☆65 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- Repository for the code and dataset for the paper: "Have LLMs Advanced enough? Towards Harder Problem Solving Benchmarks For Large Langu… ☆39 · Updated last year
- ☆22 · Updated last year
- A collection of text embedding experiments ☆55 · Updated 2 years ago
- Measuring the situational awareness of language models ☆35 · Updated last year
- Graphical Code Tracer (GCT): Visualize code at lightning speed ☆53 · Updated 10 months ago
- Drive a browser with Cohere ☆72 · Updated 2 years ago
- Latent Diffusion Language Models ☆68 · Updated last year
- A dataset of alignment research and code to reproduce it ☆77 · Updated last year
- Fast inference of Instruct tuned LLaMa on your personal devices. ☆22 · Updated 2 years ago
- Public Inflection Benchmarks ☆68 · Updated last year
- Track the progress of LLM context utilisation ☆54 · Updated last month
- Code associated to papers on superposition (in ML interpretability) ☆28 · Updated 2 years ago
- Simplex Random Feature attention, in PyTorch ☆74 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- Google Research ☆46 · Updated 2 years ago
- [Added T5 support to TRLX] A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆47 · Updated 2 years ago
- ☆117 · Updated 10 months ago
- ☆36 · Updated 2 years ago
- ☆29 · Updated last year
- ☆150 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆205 · Updated last week
- Language Models of Code are Few-Shot Commonsense Learners (EMNLP 2022) ☆86 · Updated 2 years ago