ChatGLM.cpp is a C++ implementation of the ChatGLM-6B model, enabling efficient local inference without requiring a Python environment. It is optimized for running on consumer hardware.
Features
- Provides a C++ implementation of ChatGLM-6B
- Supports running models on CPU and GPU
- Optimized for low-memory hardware and edge devices
- Allows quantization for reduced resource consumption
- Works as a lightweight alternative to Python-based inference
- Offers real-time chatbot capabilities
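To make the quantization claim concrete, here is a back-of-the-envelope estimate of weight-storage memory for a ~6.2-billion-parameter model such as ChatGLM-6B. The parameter count and the ~4.5 effective bits per weight for 4-bit GGML-style quantization (which stores per-block scales alongside the 4-bit values) are illustrative assumptions; actual file sizes depend on the model variant and quantization format.

```python
def model_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage size in GiB (ignores activations and KV cache)."""
    return n_params * bits_per_weight / 8 / 2**30

N = 6.2e9  # ChatGLM-6B has roughly 6.2 billion parameters (assumption)

fp16 = model_size_gib(N, 16)   # half-precision baseline
q4 = model_size_gib(N, 4.5)    # ~4.5 bits/weight incl. quantization scales

print(f"fp16: {fp16:.1f} GiB, 4-bit: {q4:.1f} GiB")
```

The roughly 3.5x reduction is what lets a quantized 6B model fit in the RAM of typical consumer laptops rather than requiring a dedicated GPU.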
License
MIT License