Chat Svc
The chat service keeps a database of threads, messages, and the file assets associated with them. Chat messages are the primary user interface to LLMs and other AI architectures.
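The thread/message/asset relationship can be pictured with a minimal in-memory sketch. The `ChatStore` and `Message` names below are illustrative assumptions, not the service's real API:

```python
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class Message:
    thread_id: str
    text: str
    asset_ids: list = field(default_factory=list)   # files attached to the message
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    created_at: float = field(default_factory=time.time)

class ChatStore:
    """Toy stand-in for the chat service's database of threads and messages."""

    def __init__(self):
        self.threads = {}   # thread id -> ordered list of Message

    def add_message(self, thread_id, text, asset_ids=()):
        msg = Message(thread_id, text, list(asset_ids))
        self.threads.setdefault(thread_id, []).append(msg)
        return msg

    def messages(self, thread_id):
        return list(self.threads.get(thread_id, []))
```

A frontend would render `messages(thread_id)` in order, with each message's `asset_ids` resolved to downloadable files.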
Deploy Svc
The deploy service is responsible for launching containers on whatever infrastructure the Superplatform is running on (e.g. via the Docker Svc) and registering them in the Registry Svc.
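The launch-then-register flow can be sketched like this; the `DockerRuntime` and `Registry` stand-ins are assumptions for illustration, not the actual Docker Svc or Registry Svc interfaces:

```python
class DockerRuntime:
    """Stand-in for a container runtime such as the Docker Svc."""

    def __init__(self):
        self.running = []

    def launch(self, image: str) -> str:
        container_id = f"ctr-{len(self.running)}"
        self.running.append((container_id, image))
        return container_id

class Registry:
    """Stand-in for the Registry Svc: maps service names to live instances."""

    def __init__(self):
        self.instances = {}

    def register(self, service: str, container_id: str) -> None:
        self.instances.setdefault(service, []).append(container_id)

def deploy(service: str, image: str, runtime, registry) -> str:
    # Launch on whatever runtime the platform runs on, then register
    # the new instance so other services can discover it.
    container_id = runtime.launch(image)
    registry.register(service, container_id)
    return container_id
```

Keeping the runtime behind an interface is what lets the deploy service stay agnostic about the underlying infrastructure.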
Docker Svc
The docker service maintains containers on a node. It expects the Docker socket to be mounted.
Download Svc
The download service keeps a network-local copy of files frequently accessed by services on the Superplatform.
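A cache of this kind can be sketched in a few lines; the URL-hash file naming and the injectable `fetch` callback here are simplifying assumptions, not the service's actual behavior:

```python
import hashlib
import pathlib

class DownloadCache:
    """Keep a local copy of remote files so repeated reads skip the network."""

    def __init__(self, root, fetch):
        self.root = pathlib.Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self.fetch = fetch   # fetch(url) -> bytes; the slow network path
        self.hits = 0

    def get(self, url: str) -> bytes:
        # One file per URL, named by a hash of the URL.
        path = self.root / hashlib.sha256(url.encode()).hexdigest()
        if path.exists():
            self.hits += 1
        else:
            path.write_bytes(self.fetch(url))
        return path.read_bytes()
```

The first `get` pays the download cost; every later `get` for the same URL is served from the local copy.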
Dynamic Svc
The dynamic service is designed to help build backendless applications: the goal is to be able to save and query data directly from the frontend, similar to Firebase.
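The save-and-query idea can be illustrated with an in-memory sketch; the table naming and the exact-match query filter are assumptions made for brevity:

```python
class DynamicStore:
    """Schemaless store: frontends save documents and query them back."""

    def __init__(self):
        self.tables = {}   # table name -> list of documents

    def save(self, table: str, doc: dict) -> None:
        self.tables.setdefault(table, []).append(dict(doc))

    def query(self, table: str, **filters):
        # Return documents whose fields match every filter exactly.
        rows = self.tables.get(table, [])
        return [r for r in rows
                if all(r.get(k) == v for k, v in filters.items())]
```

In a real backendless setup the same two operations would be exposed over HTTP with per-user access control, so the frontend never needs custom endpoints.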
Model Svc
The model service can start and stop AI models across multiple runtimes (e.g. Docker) and maintains a database of the models available on the platform.
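Start/stop across pluggable runtimes can be sketched as follows; the `runtimes` mapping, the status field, and `FakeRuntime` are illustrative assumptions, not the service's real schema:

```python
class FakeRuntime:
    """Stand-in for a runtime backend such as Docker."""

    def __init__(self):
        self.started = []

    def start(self, model_id: str) -> None:
        self.started.append(model_id)

    def stop(self, model_id: str) -> None:
        self.started.remove(model_id)

class ModelService:
    def __init__(self, runtimes):
        self.runtimes = runtimes   # runtime name -> runtime backend
        self.models = {}           # model id -> record in the model database

    def register(self, model_id: str, runtime: str) -> None:
        self.models[model_id] = {"runtime": runtime, "status": "stopped"}

    def start(self, model_id: str) -> None:
        record = self.models[model_id]
        self.runtimes[record["runtime"]].start(model_id)
        record["status"] = "running"

    def stop(self, model_id: str) -> None:
        record = self.models[model_id]
        self.runtimes[record["runtime"]].stop(model_id)
        record["status"] = "stopped"
```

Because each model record carries its runtime name, adding a new runtime is just another entry in the `runtimes` mapping.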
Prompt Svc
The prompt service provides an easy-to-use interface for prompting LLMs and other AI models. It aims to serve humans and machines alike with its resilient, queue-based architecture.
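The resilience of a queue-based design comes from requeueing failed work instead of dropping it. A minimal sketch, where the retry limit and the `send` callback are assumptions rather than the real service's parameters:

```python
import collections

class PromptQueue:
    """Queue prompts for a model; failed sends are retried, not lost."""

    def __init__(self, send, max_attempts=3):
        self.send = send                  # send(text) -> model response
        self.max_attempts = max_attempts
        self.queue = collections.deque()

    def add(self, prompt: str) -> None:
        self.queue.append({"text": prompt, "attempts": 0})

    def process(self):
        results = []
        while self.queue:
            item = self.queue.popleft()
            try:
                results.append(self.send(item["text"]))
            except Exception:
                item["attempts"] += 1
                if item["attempts"] < self.max_attempts:
                    self.queue.append(item)   # requeue for a later retry
        return results
```

A transient model failure (a runtime restarting, a timeout) then delays a prompt instead of losing it, which is what lets both humans and automated callers fire-and-forget.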