Training and deploying machine learning models on Amazon SageMaker
Powering Amazon's custom machine learning chips
Run local LLMs on any device; open-source
A high-throughput and memory-efficient inference and serving engine
The official Python client for the Hugging Face Hub
Everything you need to build state-of-the-art foundation models
FlashInfer: Kernel Library for LLM Serving
Single-cell analysis in Python
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Uplift modeling and causal inference with machine learning algorithms
A Pythonic framework to simplify AI service building
Operating LLMs in production
Gaussian processes in TensorFlow
Uncover insights, surface problems, monitor, and fine-tune your LLM
A unified framework for scalable computing
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
DoWhy is a Python library for causal inference
Adversarial Robustness Toolbox (ART) - Python library for ML security
State-of-the-art Parameter-Efficient Fine-Tuning
An optimizing inference proxy for LLMs
Integrate, train, and manage any AI model or API with your database
GPU environment management and cluster orchestration
MII makes low-latency and high-throughput inference possible
The easiest and laziest way to build multi-agent LLM applications
PyTorch domain library for recommendation systems