AI Inference Server

Provided by SIEMENS

Reduce the cost, effort, and time of bringing AI models to the shop floor, and focus on your domain expertise

Do you see AI as a potential solution for your current challenges?
But are you concerned about the high cost and effort of bringing AI to the shop floor and maintaining it?
Are you interested in standardizing the execution of AI models on the shop floor and integrating it with monitoring, debugging, and remote deployment capabilities?
  
AI Inference Server is the edge application that standardizes AI model execution on Siemens Industrial Edge. The application eases data ingestion, orchestrates data traffic, and is compatible with all major AI frameworks thanks to its embedded Python interpreter. It allows AI models to be deployed as content to a standard edge app instead of deploying a custom container for each model. AI finally meets Siemens Industrial Edge, and AI Inference Server leverages the benefits of both.

Entitlement
You can choose from several AI Inference Server variants, which have different hardware requirements.*

AI Inference Server - 1 pipeline
Allows the execution of a single pipeline.

AI Inference Server - 3 pipelines
Allows the execution of up to three pipelines at the same time.

AI Inference Server GPU accelerated
The AI Inference Server GPU accelerated application takes advantage of high-performance GPU computation. It allows the execution of a single pipeline on a GPU-accelerated hardware device.

* Please refer to the hardware requirements for each variant.

Only one AI Inference Server application instance (from the list above) can be installed on an edge device.
This means you can install AI Inference Server, AI Inference Server - 3 pipelines, or AI Inference Server GPU accelerated on a single edge device.
You cannot install, for example, both AI Inference Server and AI Inference Server GPU accelerated on the same device.

Product                                Price
AI Inference Server                    ₹0.00
AI Inference Server - 3 pipelines      ₹0.00
AI Inference Server GPU accelerated    ₹0.00

Your trial subscription will be automatically renewed for the subscription term indicated below, unless you cancel at least 14 days prior to the renewal date.

Total: ₹0.00 (₹0.00 / year after the trial period)
Disclaimer: Taxes may apply
Billing Term: 3-month trial, yearly subscription thereafter
Subscription Term: 3-month trial, yearly subscription thereafter
Supported payment types: Payment On Account


Key Features
  • Supports most popular AI frameworks compatible with Python
  • Orchestrates and controls AI model execution
  • Ability to run AI pipelines both with an older and a newer version of Python
  • Allows horizontal scaling of AI pipelines for optimal performance
  • Simplifies tasks such as input mapping (thanks to the integration with Databus and other Siemens Industrial Edge connectors), data ingestion, and pipeline visualization
  • Enables monitoring and debugging of AI models using inference statistics, logging, and image visualization
  • Provides pipeline version handling
  • Imports models via the UI or receives them remotely
  • Supports persistent data storage on the local device for each pipeline
The following feature is available only in the AI Inference Server - 3 pipelines variant:
  • Supports multiple pipeline execution at the same time (up to 3 simultaneous pipelines)
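Because pipelines run on the embedded Python interpreter, a pipeline step boils down to an ordinary Python function that maps incoming messages to model outputs. The sketch below is a minimal illustration only: the function name `process_input`, the JSON payload layout, and `SketchModel` are assumptions for demonstration, not the product's actual entry-point API.

```python
# Hypothetical sketch of a Python pipeline step. The entry-point name
# (process_input), the payload shape, and SketchModel are illustrative
# assumptions, not the actual AI Inference Server contract.
import json

class SketchModel:
    """Stand-in for a model loaded from any Python-compatible AI framework."""
    def predict(self, features):
        # Toy rule: flag readings above a fixed threshold as anomalous.
        return [1 if x > 0.8 else 0 for x in features]

MODEL = SketchModel()  # in practice, load a serialized model once at startup

def process_input(payload: str) -> str:
    """Map an incoming JSON message to model input and return predictions."""
    features = json.loads(payload)["features"]
    return json.dumps({"predictions": MODEL.predict(features)})
```

In a real deployment, the data ingestion and input mapping around such a function would be configured through the server's integration with Databus and other Industrial Edge connectors rather than hand-written.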

Benefits
  • Standardize AI model execution using an AI-ready inference server within the Edge ecosystem
  • Standardize the logging, monitoring, and debugging of AI models
  • Designed for MLOps Integration with AI Model Monitor
Additional benefit of the AI Inference Server GPU accelerated variant:
  • Standardize AI model execution on GPU-accelerated hardware using an AI-ready inference server within the Edge ecosystem

Additional Information

Thank you for your interest in our solutions for your business.

Siemens AG

DI FA CTR SVC&AI IAI

Siemenspromenade 1

91058 Erlangen, Germany

Copyright © Siemens AG, 2023, and licensors. All rights reserved.