AI Inference Server

Provided by SIEMENS

Reduce the cost, effort, and time of bringing AI models to the shop floor, and stay focused on your domain expertise

Do you see AI as a potential solution for your current challenges?
But are you concerned about the high cost and effort of bringing AI to the shop floor and maintaining it?
Are you interested in standardizing the execution of AI models on the shop floor and integrating it with monitoring, debugging, and remote deployment capabilities?
  
AI Inference Server is the edge application to standardize AI model execution on Siemens Industrial Edge. The application eases data ingestion, orchestrates data traffic, and is compatible with all popular AI frameworks thanks to its embedded Python interpreter. It lets you deploy AI models as content into a standard edge app instead of building a custom container for each model. AI finally meets Siemens Industrial Edge, and AI Inference Server leverages the benefits of both.
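To give a flavour of this execution model, below is a minimal sketch of a Python pipeline step. The entrypoint name process_input, the dict-based payload, and the variable names sensor_input and prediction follow the convention used by the Siemens AI Software Development Kit, but treat them as illustrative assumptions rather than the definitive API.

import json

THRESHOLD = 0.5  # stand-in for a real model's decision logic

def process_input(data: dict):
    """Handle one payload from the configured data source and return
    the variables to publish on the pipeline's output."""
    # "sensor_input" is a hypothetical input variable name configured
    # in the pipeline; the actual names come from your pipeline setup.
    payload = json.loads(data["sensor_input"])
    score = float(payload.get("value", 0.0))
    # Returning a dict maps keys to the pipeline's output variables;
    # returning None would publish nothing for this message.
    return {"prediction": json.dumps({"anomaly": score > THRESHOLD})}

Because the step runs inside the server's embedded Python interpreter, the same script pattern can wrap a model from any framework available in the pipeline's Python environment.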

Entitlement
You can choose from several AI Inference Server variants with different hardware requirements.*

AI Inference Server - 1 pipeline
Allows the execution of a single pipeline.

AI Inference Server - 3 pipelines
Allows the execution of up to three pipelines at the same time.

AI Inference Server GPU accelerated
The AI Inference Server GPU accelerated application takes advantage of high-performance GPU computation. It allows the execution of a single pipeline on a GPU-accelerated hardware device.

* Please find the hardware requirements on the left side.

Only one AI Inference Server application instance (from the list above) can be installed on an edge device.
That is, you can install AI Inference Server - 1 pipeline, AI Inference Server - 3 pipelines, or AI Inference Server GPU accelerated on a single edge device.
You cannot, for example, install both AI Inference Server - 1 pipeline and AI Inference Server GPU accelerated on the same device.
