mustafaaljadery / gemma-2B-10M
Gemma 2B with 10M context length using Infini-attention.
☆947 · Updated last year
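The headline repository's technique, Infini-attention, keeps a fixed-size compressive memory that is updated once per segment and queried alongside local attention, so the context window can grow without growing memory cost. Below is a minimal sketch of just the memory update and retrieval, assuming the ELU+1 feature map described in the Infini-attention paper; all names are illustrative and this is not the repository's actual code:

```python
import numpy as np

def elu_plus_one(x):
    # Non-negative feature map sigma(x) = ELU(x) + 1
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_memory_update(M, z, K, V):
    # Compressive memory update: M <- M + sigma(K)^T V, z <- z + sum sigma(K)
    sK = elu_plus_one(K)
    return M + sK.T @ V, z + sK.sum(axis=0)

def infini_memory_retrieve(M, z, Q):
    # Retrieval: A_mem = sigma(Q) M / (sigma(Q) z)
    sQ = elu_plus_one(Q)
    return (sQ @ M) / (sQ @ z)[:, None]

d_k, d_v = 4, 4
M = np.zeros((d_k, d_v))   # memory stays (d_k, d_v) no matter how long the stream
z = np.zeros(d_k)
rng = np.random.default_rng(0)
for _ in range(3):  # stream three segments; cost per segment is constant
    K = rng.normal(size=(8, d_k))
    V = rng.normal(size=(8, d_v))
    M, z = infini_memory_update(M, z, K, V)
Q = rng.normal(size=(2, d_k))
out = infini_memory_retrieve(M, z, Q)
print(out.shape)  # (2, 4)
```

The key design point is that the memory matrix `M` never grows with sequence length, which is what makes a 10M-token context tractable on modest hardware.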
Alternatives and similar repositories for gemma-2B-10M
Users interested in gemma-2B-10M are comparing it to the repositories listed below.
- OpenCodeInterpreter is a suite of open-source code generation systems aimed at bridging the gap between large language models and sophist… ☆1,678 · Updated last year
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,286 · Updated 3 weeks ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆988 · Updated last year
- We introduced a new model designed for the code generation task. Its test accuracy on the HumanEval base dataset surpasses that of GPT-4 … ☆856 · Updated last year
- Llama-3 agents that can browse the web by following instructions and talking to you ☆1,412 · Updated 8 months ago
- ☆1,085 · Updated last year
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆686 · Updated last year
- ☆997 · Updated 6 months ago
- The first open-source Artificial Narrow Intelligence generalist agentic framework Computer-Using-Agent that fully operates graphical-user… ☆1,309 · Updated 6 months ago
- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones ☆1,297 · Updated last year
- ☆446 · Updated last year
- LLM Transparency Tool (LLM-TT), an open-source interactive toolkit for analyzing internal workings of Transformer-based language models. … ☆830 · Updated 8 months ago
- Automate the analysis of GitHub repositories for LLMs with RepoToTextForLLMs. Fetch READMEs, structure, and non-binary files efficiently.… ☆771 · Updated last year
- Training LLMs with QLoRA + FSDP ☆1,526 · Updated 9 months ago
- [ICLR-2025-SLLM Spotlight 🔥] MobiLlama: Small Language Model tailored for edge devices ☆658 · Updated 3 months ago
- Implementation of plug-and-play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆710 · Updated last year
- Official inference library for pre-processing of Mistral models ☆784 · Updated this week
- Stats for Custom Chat GPTs not created by OpenAI ☆389 · Updated last year
- ☆864 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,399 · Updated last year
- The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling ☆721 · Updated 9 months ago
- A series of math-specific large language models of our Qwen2 series. ☆997 · Updated 7 months ago
- Run Mixtral-8x7B models in Colab or on consumer desktops ☆2,315 · Updated last year
- Port of OpenAI's Whisper model in C/C++ with xtts and wav2lip ☆828 · Updated 3 months ago
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,084 · Updated last year
- [NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments ☆2,122 · Updated this week
- Evaluation suite for LLMs ☆359 · Updated last month
- ☆433 · Updated 10 months ago
- Train Models Contrastively in PyTorch ☆741 · Updated 5 months ago
- multi1: create o1-like reasoning chains with multiple AI providers (and locally). Supports LiteLLM as backend too for 100+ providers at o… ☆350 · Updated 7 months ago