kyegomez / OmniByteFormer
OmniByteFormer is a generalized Transformer model that can process any type of data by converting it into byte sequences, bypassing traditional tokenization and data-type-specific encodings.
☆13 · Updated this week
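The byte-level idea in the description can be illustrated with a minimal sketch. This is an illustrative assumption, not OmniByteFormer's actual API: any input, whatever its modality, is serialized to raw bytes, and each byte is already an integer token ID in a fixed 256-symbol vocabulary, so no modality-specific tokenizer is required.

```python
# Hypothetical helper, not part of the OmniByteFormer codebase.
def bytes_to_ids(data) -> list[int]:
    """Serialize any input (str, bytes, or repr-able object) to byte token IDs in [0, 255]."""
    if isinstance(data, str):
        raw = data.encode("utf-8")
    elif isinstance(data, (bytes, bytearray)):
        raw = bytes(data)
    else:
        raw = repr(data).encode("utf-8")  # naive fallback serialization for other objects
    return list(raw)  # iterating over bytes yields integers, which serve directly as token IDs

# The vocabulary is fixed at 256 symbols regardless of modality -- no tokenizer training needed.
VOCAB_SIZE = 256

text_ids = bytes_to_ids("hi")         # [104, 105]
image_ids = bytes_to_ids(b"\x89PNG")  # [137, 80, 78, 71] -- PNG magic bytes
```

The same function handles text, image bytes, or arbitrary structured data, which is the property that lets a single Transformer consume heterogeneous inputs.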
Alternatives and similar repositories for OmniByteFormer
Users interested in OmniByteFormer are comparing it to the libraries listed below.
- AgentParse is a high-performance parsing library designed to map various structured data formats (such as Pydantic models, JSON, YAML, an… ☆14 · Updated last week
- CogNetX is an advanced, multimodal neural network architecture inspired by human cognition. It integrates speech, vision, and video proce… ☆16 · Updated 2 weeks ago
- Various agents from all of the top agent frameworks to integrate into swarms! Langchain, Griptape, CrewAI, and more! ☆13 · Updated this week
- ☆14 · Updated last year
- A forest of autonomous agents. ☆19 · Updated 7 months ago
- Mamba R1 represents a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Ex… ☆22 · Updated last week
- Transform unstructured documents into actionable, structured data with enterprise-grade precision and reliability, ready for large-scale … ☆19 · Updated 2 weeks ago
- OmegaViT (ΩViT) is a cutting-edge vision transformer architecture that combines multi-query attention, rotary embeddings, state space mod… ☆14 · Updated last week
- An open-source replication of the strawberry method that leverages Monte Carlo Search with PPO and/or DPO ☆30 · Updated this week
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant … ☆15 · Updated last year
- A framework making it effortless to convert any LLM into a reasoning agent like o1 or DeepSeek's R1 ☆21 · Updated last week
- Enhancement in Multimodal Representation Learning. ☆40 · Updated last year
- Implementation of SelfExtend from the paper "LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning" in PyTorch and Zeta ☆13 · Updated 9 months ago
- The Swarm Ecosystem ☆23 · Updated last year
- Implementation of Mind Evolution, Evolving Deeper LLM Thinking, from DeepMind ☆56 · Updated 2 months ago
- Community implementation of the paper "Multi-Head Mixture-of-Experts" in PyTorch ☆27 · Updated 2 weeks ago
- Visual RAG using less than 300 lines of code. ☆28 · Updated last year
- A Data Source for Reasoning Embodied Agents ☆19 · Updated last year
- A simple package for leveraging Falcon 180B and the HF ecosystem's tools, including training/inference scripts, safetensors, integrations… ☆12 · Updated last year
- Simple implementation of TinyGPTV in super simple Zeta lego blocks ☆16 · Updated 9 months ago
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite. ☆34 · Updated last year
- Brainwave is a state-of-the-art neural decoder that transforms electroencephalogram (EEG) and brain signals into multimodal outputs inclu… ☆12 · Updated last week
- Generate high-quality textual or multi-modal datasets with Agents ☆18 · Updated 2 years ago
- Tiktok is an advanced multimedia recommender system that fuses the generative modality-aware collaborative self-augmentation and contrast… ☆13 · Updated 2 years ago
- Code for "Accelerating Training with Neuron Interaction and Nowcasting Networks" [to appear at ICLR 2025] ☆20 · Updated 3 months ago
- A swarm of LLM agents that will help you test, document, and productionize your code! ☆18 · Updated last week
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆22 · Updated 9 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Updated 9 months ago
- PyTorch implementation of the paper "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated this week