moomou / listening-with-llm
☆17 · Updated last year
Alternatives and similar repositories for listening-with-llm
Users interested in listening-with-llm are comparing it to the libraries listed below.
- ☆63 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last week
- Modified Beam Search with periodic restarts ☆12 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆29 · Updated this week
- A simple package for leveraging Falcon 180B and the HF ecosystem's tools, including training/inference scripts, safetensors, integrations… ☆11 · Updated last year
- BH hackathon ☆13 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆69 · Updated last year
- entropix-style sampling + GUI ☆27 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆55 · Updated last month
- Seamless Voice Interactions with LLMs ☆12 · Updated 2 years ago
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆35 · Updated last year
- ☆31 · Updated last year
- Cog wrapper for collabora/WhisperSpeech ☆24 · Updated last year
- ☆26 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆11 · Updated 2 years ago
- ☆15 · Updated 6 months ago
- ☆53 · Updated last year
- Implementation of https://arxiv.org/pdf/2312.09299 ☆21 · Updated last year
- Generate visual podcasts about novels using open source models ☆25 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- Zeta implementation of a reusable, plug-and-play feedforward from the paper "Exponentially Faster Language Modeling" ☆15 · Updated last year
- Trigger an LLM in your CI/CD to auto-complete your work ☆10 · Updated 2 years ago
- ☆116 · Updated 11 months ago
- ☆50 · Updated 2 years ago
- Safely push a Cog model version by making sure it works and is backwards-compatible with previous versions. ☆16 · Updated last week
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Using multiple LLMs for ensemble forecasting ☆16 · Updated last year
- A forest of autonomous agents. ☆17 · Updated 9 months ago