kyegomez / Gemini
The open-source implementation of Gemini, the Google model that will "eclipse ChatGPT"
☆462 · Updated 2 weeks ago
Alternatives and similar repositories for Gemini
Users interested in Gemini are comparing it to the libraries listed below.
- ☆223 · Updated last year
- [ICLR-2025-SLLM Spotlight 🔥] MobiLlama: Small Language Model tailored for edge devices ☆650 · Updated 2 months ago
- Implementation of plug-and-play attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆706 · Updated last year
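LongNet's core idea is dilated attention: each token attends only to positions sampled at a stride within its local segment, keeping cost far below full attention. A minimal index-level sketch (segment sizes and dilation rates here are illustrative, not the paper's settings, and `dilated_indices` is a hypothetical helper, not the repository's API):

```python
# Sketch of LongNet-style dilated attention patterns: within each segment,
# a token attends only to positions sharing its residue class modulo the
# dilation rate, so the per-token key set shrinks by the dilation factor.

def dilated_indices(seq_len, segment=8, dilation=2):
    """For each position, return the positions it may attend to."""
    out = []
    for i in range(seq_len):
        start = (i // segment) * segment          # beginning of this token's segment
        keys = [j for j in range(start, min(start + segment, seq_len))
                if (j - start) % dilation == (i - start) % dilation]
        out.append(keys)
    return out

# Sequence of 8 tokens, segments of 4, dilation 2:
# position 0 sees [0, 2], position 1 sees [1, 3], position 4 sees [4, 6], ...
print(dilated_indices(8, segment=4, dilation=2))
```

The real model mixes several segment/dilation configurations in parallel so that information still propagates across the whole sequence.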
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆927 · Updated last year
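The state-space architecture behind Mamba-style models replaces attention with a linear recurrence that is processed in O(T) time with constant state. A hand-rolled scalar sketch of that recurrence (illustrative only, not the repository's implementation; real models use learned, input-dependent matrices rather than fixed scalars):

```python
# Scalar linear state-space recurrence:
#   h_t = a * h_{t-1} + b * x_t   (state update)
#   y_t = c * h_t                 (readout)
# One pass over the sequence, constant memory for the state.

def ssm_scan(xs, a=0.5, b=1.0, c=2.0):
    h, ys = 0.0, []
    for x in xs:
        h = a * h + b * x   # fold the new input into the running state
        ys.append(c * h)    # emit the current readout
    return ys

# Impulse response decays geometrically with the state-transition scalar a.
print(ssm_scan([1.0, 0.0, 0.0]))  # [2.0, 1.0, 0.5]
```

Because the state is a fixed-size summary rather than a growing key/value cache, context length does not inflate memory at inference time.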
- Code for fine-tuning Platypus-family LLMs using LoRA ☆628 · Updated last year
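LoRA fine-tunes a model by freezing each weight matrix W and learning only a low-rank update B·A. A minimal pure-Python sketch of the adapted forward pass (toy shapes and values; `lora_forward` is a hypothetical helper, not the Platypus training code, which in practice uses PyTorch and a library such as PEFT):

```python
# LoRA idea: y = W x + (alpha / r) * B (A x), with W frozen,
# A of shape (r x d_in) and B of shape (d_out x r), r << min(d_in, d_out).

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    base = matvec(W, x)               # frozen pretrained path
    update = matvec(B, matvec(A, x))  # trainable low-rank adapter path
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]

# Toy shapes: d_out = 2, d_in = 3, rank r = 2.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
A = [[0.1, 0.0, 0.0],
     [0.0, 0.1, 0.0]]
B = [[0.0, 0.0],
     [0.0, 0.0]]   # B is initialized to zero, so the adapter is a no-op at start
x = [1.0, 2.0, 3.0]

print(lora_forward(W, A, B, x))  # matches matvec(W, x) while B is zero
```

Only A and B receive gradients, so the trainable parameter count drops from d_out·d_in to r·(d_in + d_out).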
- ☆710 · Updated last year
- 👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] ☆624 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆750 · Updated last year
- ☆447 · Updated last year
- Repository for organizing datasets and papers used in Open LLM. ☆99 · Updated 2 years ago
- An open-source implementation of Google's PaLM models ☆820 · Updated last year
- From-scratch implementation of a vision-language model in pure PyTorch ☆227 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆290 · Updated last year
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models. ☆185 · Updated last year
- Effort to open-source NLLB checkpoints. ☆452 · Updated last year
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆240 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆888 · Updated 2 months ago
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆767 · Updated last year
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining ☆701 · Updated last year
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling" ☆853 · Updated 10 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆984 · Updated 11 months ago
- Fine-tuning LLMs using QLoRA ☆258 · Updated last year
- A novel implementation fusing ViT with Mamba into a fast, agile, high-performance multimodal model. Powered by Zeta, the simplest… ☆453 · Updated last month
- Automatically evaluate your LLMs in Google Colab ☆649 · Updated last year