Browse free open source Python LLM Inference Tools and projects below.
Open-source tool to run local LLMs on any device
A high-throughput and memory-efficient inference and serving engine
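This tagline matches vLLM's. Assuming that is the project listed, a minimal offline batch-generation sketch (model name chosen only as a small smoke-test example):

```python
# Minimal offline generation with vLLM (assumption: this entry is vLLM).
# pip install vllm
from vllm import LLM, SamplingParams

prompts = ["The capital of France is", "An LLM serving engine should"]
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # small model for a quick local test
for output in llm.generate(prompts, params):
    print(output.prompt, "->", output.outputs[0].text)
```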
Ready-to-use OCR with 80+ supported languages
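This description matches EasyOCR. Assuming that is the project listed, a minimal sketch (the input image path is a placeholder):

```python
# Basic OCR with EasyOCR (assumption: this entry is EasyOCR).
# pip install easyocr
import easyocr

reader = easyocr.Reader(["en"])           # downloads detection/recognition models on first run
results = reader.readtext("receipt.png")  # placeholder input image
for bbox, text, confidence in results:
    print(f"{confidence:.2f}  {text}")
```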
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
FlashInfer: Kernel Library for LLM Serving
Uncover insights, surface problems, monitor, and fine-tune your LLM
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
Lightweight anchor-free object detection model
The official Python client for the Huggingface Hub
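For the huggingface_hub client, the core workflow is downloading files from a repo on the Hub into the local cache; a minimal sketch:

```python
# Fetching a single file from a Hub repo with huggingface_hub.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(path)  # local cache path of the downloaded file
```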
Operating LLMs in production
Replace OpenAI GPT with another LLM in your app
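Swapping OpenAI GPT for a self-hosted model is often just a base-URL change, since several engines in this list expose OpenAI-compatible endpoints. A sketch assuming a local server at http://localhost:8000/v1 (the address and model name are placeholders):

```python
# Pointing the OpenAI Python client (v1+) at a local OpenAI-compatible server.
# pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder local endpoint
    api_key="not-needed",                 # many local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the name your server reports
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)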
MII makes low-latency and high-throughput inference possible
20+ high-performance LLMs with recipes to pretrain and finetune at scale
State-of-the-art Parameter-Efficient Fine-Tuning
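This tagline matches Hugging Face PEFT. Assuming that is the project listed, a minimal LoRA sketch (gpt2 chosen only as a small example; PEFT picks default target modules for common architectures):

```python
# Wrapping a causal LM with a LoRA adapter via PEFT (assumption: this entry is PEFT).
# pip install peft transformers
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```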
A library for accelerating Transformer models on NVIDIA GPUs
Multilingual Automatic Speech Recognition with word-level timestamps
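This description matches whisper-timestamped. Assuming that is the project listed, a sketch of transcription with word-level timestamps (the audio path is a placeholder):

```python
# Word-level timestamps with whisper-timestamped (assumption: this entry is whisper-timestamped).
# pip install whisper-timestamped
import whisper_timestamped as whisper

model = whisper.load_model("tiny")
audio = whisper.load_audio("speech.mp3")  # placeholder input file
result = whisper.transcribe(model, audio)

for segment in result["segments"]:
    for word in segment.get("words", []):
        print(f'{word["start"]:.2f}-{word["end"]:.2f}  {word["text"]}')
```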
A set of Docker images for training and serving models in TensorFlow
Library for OCR-related tasks powered by Deep Learning
Standardized Serverless ML Inference Platform on Kubernetes
Everything you need to build state-of-the-art foundation models
An easy-to-use LLM quantization package with user-friendly APIs
PyTorch library of curated Transformer models and their components
GPU environment management and cluster orchestration
Low-latency REST API for serving text embeddings
The easiest and laziest way to build multi-agent LLM applications