📄️ Chat Svc
The chat service keeps a database of threads, messages, and the file assets associated with them. Chat messages are the primary user interface to LLMs and other AI architectures.
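As a rough illustration of how a client might talk to the chat service, the sketch below creates a thread and adds a message to it over HTTP. The endpoint paths, payload fields, and the `token` variable are assumptions made for the example, not the service's documented API.

```ts
// Hypothetical sketch: creating a thread and posting a message to the Chat Svc.
// The paths and payload shapes below are illustrative assumptions.
const token = "YOUR_SESSION_TOKEN"; // assumed auth token issued by the User Svc

async function postChatMessage(baseUrl: string, content: string) {
  // Assumed endpoint for creating a thread
  const threadRes = await fetch(`${baseUrl}/chat-svc/thread`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ title: "New conversation" }),
  });
  const { thread } = await threadRes.json(); // assumed response shape

  // Assumed endpoint for adding a message to that thread
  await fetch(`${baseUrl}/chat-svc/thread/${thread.id}/message`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ content }),
  });
}
```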
📄️ Docker Svc
The docker service maintains containers on a node. It expects the Docker socket to be mounted.
📄️ Download Svc
The download service keeps a network-local copy of files that Superplatform services access frequently.
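For illustration, kicking off a download and checking its progress might look like the sketch below; the endpoint paths and response fields are assumptions, not the documented API.

```ts
// Hypothetical sketch: asking the Download Svc to fetch and cache a file.
// Paths and fields are illustrative assumptions.
async function download(baseUrl: string, token: string, url: string) {
  // Assumed endpoint that starts (or resumes) a download of the given URL
  await fetch(`${baseUrl}/download-svc/download`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ url }),
  });

  // Assumed endpoint that reports progress for that URL
  const res = await fetch(`${baseUrl}/download-svc/download/${encodeURIComponent(url)}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json(); // e.g. { progress: 42, status: "inProgress" } (assumed shape)
}
```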
📄️ Dynamic Svc
The dynamic service is designed to help build backendless applications: the goal is to save and query data directly from the frontend, similar to Firebase.
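To make the Firebase comparison concrete, here is a minimal sketch of what saving and querying records straight from the frontend could look like. The `/dynamic-svc` paths, the `table` field, and the query shape are assumptions for illustration, not the documented API.

```ts
// Hypothetical sketch: saving and querying data from the frontend via the Dynamic Svc.
// Endpoint paths and payload shapes are illustrative assumptions.
const token = "YOUR_SESSION_TOKEN"; // assumed auth token

async function saveTodo(baseUrl: string, text: string) {
  await fetch(`${baseUrl}/dynamic-svc/object`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ table: "todos", data: { text, done: false } }),
  });
}

async function listOpenTodos(baseUrl: string) {
  const res = await fetch(`${baseUrl}/dynamic-svc/objects`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ table: "todos", filters: [{ field: "done", equals: false }] }),
  });
  return res.json();
}
```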
📄️ Model Svc
The model service can start and stop AI models across multiple runtimes (e.g. Docker) and maintains a database of the models available on the platform.
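As an illustration, starting a model from a client could look roughly like the call below. The endpoint path and the `modelId` parameter are assumptions made for the sake of the example.

```ts
// Hypothetical sketch: asking the Model Svc to start a model.
// The path and field names are illustrative assumptions.
async function startModel(baseUrl: string, token: string, modelId: string) {
  const res = await fetch(`${baseUrl}/model-svc/model/${encodeURIComponent(modelId)}/start`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) {
    throw new Error(`failed to start model ${modelId}: ${res.status}`);
  }
}
```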
📄️ Policy Svc
The policy service provides features such as rate limiting of endpoint calls by user IP, user ID, organization ID, and more.
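A rate-limiting policy might be expressed to the service as something like the sketch below; the endpoint path and the policy fields (`entity`, `maxRequests`, `timeWindowSeconds`) are assumptions used only to illustrate the idea of limiting calls per user ID or IP.

```ts
// Hypothetical sketch: registering a rate-limit policy with the Policy Svc.
// Path and field names are illustrative assumptions.
async function upsertRateLimit(baseUrl: string, token: string) {
  await fetch(`${baseUrl}/policy-svc/instance`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      templateId: "rate-limit",       // assumed template for rate limiting
      endpoint: "/prompt-svc/prompt", // endpoint being protected
      entity: "userId",               // limit per user ID (could also be IP or org ID)
      maxRequests: 10,                // assumed parameter names
      timeWindowSeconds: 60,
    }),
  });
}
```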
📄️ Prompt Svc
The prompt service provides an easy-to-use interface for prompting LLMs and other AI models. It aims to serve humans and machines alike with its resilient, queue-based architecture.
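A prompt submission could look like the sketch below: the caller enqueues a prompt and receives the answer once the queue processes it. The endpoint path and the `prompt`/`sync` fields are assumptions for illustration.

```ts
// Hypothetical sketch: submitting a prompt to the Prompt Svc queue.
// Path and payload shape are illustrative assumptions.
async function prompt(baseUrl: string, token: string, text: string) {
  const res = await fetch(`${baseUrl}/prompt-svc/prompt`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt: text,
      sync: true, // assumed flag: wait for the answer instead of polling the queue
    }),
  });
  return res.json(); // assumed to contain the model's answer
}
```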
📄️ Registry Svc
The registry service is designed to maintain a database of services, service instances and nodes.
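For illustration, a service instance registering itself and a client discovering instances might look like the sketch below; the paths and fields are assumptions, not the documented API.

```ts
// Hypothetical sketch: registering a service instance and querying it back.
// Paths and payload shapes are illustrative assumptions.
async function registerInstance(baseUrl: string, token: string) {
  await fetch(`${baseUrl}/registry-svc/instance`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ serviceSlug: "example-svc", url: "http://10.0.0.5:8081" }),
  });
}

async function listInstances(baseUrl: string, token: string) {
  const res = await fetch(`${baseUrl}/registry-svc/instances?serviceSlug=example-svc`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
```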
📄️ Deploy Svc
The deploy service is responsible for launching containers on whatever infrastructure Superplatform is running on (e.g. via the Docker Svc) and registering them in the Registry Svc.
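A deployment request might be phrased roughly as below: the caller describes an image, and the service picks a node, launches the container, and registers the resulting instance. The path, field names, and image name are assumptions for illustration.

```ts
// Hypothetical sketch: asking the Deploy Svc to run an image somewhere on the platform.
// Path and payload shape are illustrative assumptions.
async function deploy(baseUrl: string, token: string) {
  await fetch(`${baseUrl}/deploy-svc/deployment`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      serviceSlug: "example-svc",
      image: "ghcr.io/example/example-svc:latest", // assumed image name
      replicas: 1,
    }),
  });
}
```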
📄️ User Svc
The user service is at the heart of Superplatform, managing users, tokens, organizations, permissions and more. Each service and human on the Superplatform network has an account in the User Svc.
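As an example of the account model, a client (human or service) could log in and reuse the returned token on subsequent calls, roughly as sketched below; the `/user-svc/login` path and the response shape are assumptions for illustration.

```ts
// Hypothetical sketch: logging in to the User Svc and reusing the token.
// Path and response shape are illustrative assumptions.
async function login(baseUrl: string, slug: string, password: string): Promise<string> {
  const res = await fetch(`${baseUrl}/user-svc/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ slug, password }),
  });
  const body = await res.json();
  return body.token; // assumed response shape: { token: "..." }
}

// The returned token would then be sent as `Authorization: Bearer <token>`
// on calls to the other services.
```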