radha-patel / SySTeC
Performant kernels for symmetric tensors
☆16 · Updated last year
Alternatives and similar repositories for SySTeC
Users interested in SySTeC are comparing it to the libraries listed below.
- A performance library for machine learning applications. ☆185 · Updated 2 years ago
- Official repository for K-EXAONE built by LG AI Research ☆45 · Updated this week
- Automatically add academic citations to your LaTeX documents in Overleaf. ☆53 · Updated last week
- "Learning-based One-line intelligence Owner Network Connectivity Tool" ☆16 · Updated 2 years ago
- ☆19 · Updated last year
- Command-line utility for monitoring GPU hardware. ☆106 · Updated last week
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multinode environment. ☆35 · Updated 4 months ago
- 1-Click is all you need. ☆63 · Updated last year
- Build your own generative AI language model from scratch (including pretraining / SFT with LoRA) ☆16 · Updated 10 months ago
- ☆26 · Updated 11 months ago
- Study parallel programming - CUDA, OpenMP, MPI, Pthread ☆61 · Updated 3 years ago
- ☆24 · Updated 7 months ago
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated 2 years ago
- 🔮 LLM GPU Calculator ☆21 · Updated 2 years ago
- ☆90 · Updated last year
- CUDA-based GPU programming ☆39 · Updated last year
- ☆103 · Updated 2 years ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- Triangles in action! Triton ☆16 · Updated last year
- Training-free, post-training, efficient sub-quadratic-complexity attention, implemented with OpenAI Triton. ☆147 · Updated 2 months ago
- ☆56 · Updated last year
- Demonstrates a problem solver based on an agentic workflow. ☆16 · Updated 10 months ago
- ☆52 · Updated last year
- ☆32 · Updated last year
- BERT score for text generation ☆12 · Updated 11 months ago
- Demonstrates how to use the model-context-protocol. ☆39 · Updated this week
- FriendliAI Model Hub ☆92 · Updated 3 years ago
- ☆64 · Updated 5 months ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 · Updated last year