kyegomez / Gemini
The open source implementation of Gemini, the Google model that will "eclipse ChatGPT"
★456 · Updated 3 weeks ago
Alternatives and similar repositories for Gemini
Users interested in Gemini are comparing it to the libraries listed below.
- Mamba-Chat: A chat LLM based on the state-space model architecture ★940 · Updated last year
- [ICLR 2025 SLLM Spotlight] MobiLlama: Small Language Model tailored for edge devices ★669 · Updated 8 months ago
- Build high-performance AI models with modular building blocks ★577 · Updated this week
- ★446 · Updated last year
- ★229 · Updated 2 years ago
- ★1,025 · Updated 11 months ago
- A novel implementation of fusing ViT with Mamba into a fast, agile, and high-performance multi-modal model. Powered by Zeta, the simplest… ★462 · Updated 2 months ago
- Fine-tuning LLMs using QLoRA ★266 · Updated last year
- ★715 · Updated last year
- Implementation of plug-and-play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ★715 · Updated 2 years ago
- Code for fine-tuning Platypus-family LLMs using LoRA ★631 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ★936 · Updated last month
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models. ★189 · Updated last year
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining ★733 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ★763 · Updated last year
- An open-source implementation of Google's PaLM models ★819 · Updated last year
- Fine-tune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ★246 · Updated last year
- OpenGPT 4o is a free alternative to OpenAI's GPT-4o ★212 · Updated last year
- The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling ★726 · Updated last year
- An all-new language model that processes ultra-long sequences of 100,000+ tokens ultra-fast ★149 · Updated last year
- Maybe the new state-of-the-art vision model? We'll see. ★170 · Updated 2 years ago
- ★414 · Updated last year
- Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] ★635 · Updated last year
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ★773 · Updated 2 years ago
- Effort to open-source NLLB checkpoints. ★473 · Updated last year
- Reaching LLaMA2 performance with 0.1M dollars ★988 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ★294 · Updated last year
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ★687 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ★1,637 · Updated last year
- Implementation of I-JEPA from "Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture" ★281 · Updated last year