Ying1123 / awesome-neural-symbolic
A list of awesome neural symbolic papers.
☆47 · Updated 2 years ago
Alternatives and similar repositories for awesome-neural-symbolic
Users interested in awesome-neural-symbolic are comparing it to the repositories listed below.
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆66 · Updated 6 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated last month
- SMT-LIB benchmarks for shape computations from deep learning models in PyTorch ☆17 · Updated 2 years ago
- Experiment of using Tangent to autodiff triton ☆79 · Updated last year
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆18 · Updated 2 years ago
- Simple and efficient pytorch-native transformer training and inference (batched) ☆75 · Updated last year
- Python package for rematerialization-aware gradient checkpointing (a plain-checkpointing sketch follows this list) ☆25 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆74 · Updated 3 weeks ago
- ☆20 · Updated 2 years ago
- PyTorch implementation of Deep Symbolic Simplification Without Human Knowledge ☆14 · Updated 4 years ago
- Compression for Foundation Models ☆31 · Updated 2 months ago
- Sparsity support for PyTorch ☆35 · Updated 2 months ago
- ☆19 · Updated 2 years ago
- The implementation for the MLSys 2023 paper: "Cuttlefish: Low-rank Model Training without All The Tuning" ☆45 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆59 · Updated 7 months ago
- ☆13 · Updated 3 weeks ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆116 · Updated last year
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees". ☆28 · Updated 2 years ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference (a sliding-window sketch follows this list). ☆62 · Updated 4 months ago
- Automatic differentiation for Triton Kernels ☆11 · Updated 2 months ago
- Personal solutions to the Triton Puzzles ☆18 · Updated 10 months ago
- ☆71 · Updated 3 weeks ago
- ☆13 · Updated 6 months ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆45 · Updated 2 weeks ago
- ☆22 · Updated 4 years ago
- An Attention Superoptimizer ☆21 · Updated 4 months ago
- ☆93 · Updated last week
- Best practices for testing advanced Mixtral, DeepSeek, and Qwen series MoE models using Megatron Core MoE. ☆17 · Updated last week
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated 11 months ago
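
To give a flavor of the rematerialization theme behind the gradient-checkpointing entry above, here is a minimal sketch using PyTorch's stock `torch.utils.checkpoint` API. The `CheckpointedMLP` module is purely illustrative; the listed package adds rematerialization-aware planning on top of plain checkpointing, which this sketch does not attempt.

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class CheckpointedMLP(nn.Module):
    """Recomputes each block's activations during backward
    instead of storing them, trading compute for memory."""
    def __init__(self, dim=256, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)
        )

    def forward(self, x):
        for block in self.blocks:
            # checkpoint() discards the block's intermediate activations
            # on the forward pass and re-runs `block` during backward.
            x = checkpoint(block, x, use_reentrant=False)
        return x

model = CheckpointedMLP()
x = torch.randn(8, 256, requires_grad=True)
model(x).sum().backward()  # gradients flow through the recomputed blocks
```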
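
Likewise, for the sparse-attention entry above, the sketch below shows one of the simplest such patterns, a causal sliding window, as a dense boolean mask. This is an assumption-laden illustration, not the listed repo's code: real implementations avoid materializing the full score matrix.

```python
import torch

def sliding_window_attention(q, k, v, window=128):
    """Causal attention where each query attends only to the
    last `window` positions -- one of the simplest sparse patterns."""
    T, d = q.shape[-2], q.shape[-1]
    i = torch.arange(T).unsqueeze(1)   # query positions, shape (T, 1)
    j = torch.arange(T).unsqueeze(0)   # key positions, shape (1, T)
    # keep (i, j) pairs that are causal and inside the local window
    keep = (j <= i) & (i - j < window)
    scores = (q @ k.transpose(-2, -1)) / d ** 0.5
    scores = scores.masked_fill(~keep, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 4, 512, 64)  # (batch, heads, seq, head_dim)
out = sliding_window_attention(q, k, v, window=64)
```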