...
Platform
...
Platform Concept
The Artificial Intelligence Platform is a set of APIs and tools that enable customers to set up and use AI functionalities by themselves, at low cost and with high performance.
...
Every part of the AI Platform can be deployed on-premises or in the cloud.
Platform Components
There are two big parts of the Artificial Intelligence Platform, the Training part and the Serving part, both with their own components.
Model Training
The Model Training part of the platform is responsible for storing exported data from yuuvis® Momentum (documents, metadata), preprocessing exported data, machine learning training, model evaluation, etc.
...
- Exported data repository – a physical place where exported data are stored (local storage, S3, Azure Blob Storage, etc.)
- ML Training Pipeline – the component responsible for preprocessing the data, training the machine learning model, and evaluating it. Can be deployed on-premises or in the cloud.
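The steps an ML Training Pipeline performs can be sketched in miniature. The tiny dataset and the naive word-count classifier below are illustrative assumptions only; the platform's real pipelines use proper machine learning algorithms.

```python
# Minimal sketch of what an ML Training Pipeline does: preprocess
# exported data, train a model, and evaluate it. Dataset and classifier
# are toy stand-ins, not the platform's actual algorithms.
from collections import Counter

# Exported documents (text) with their checked metadata label.
train_docs = [
    ("invoice no 123 total due 500 eur", "invoice"),
    ("delivery note for order 77", "delivery_note"),
    ("invoice 456 amount payable 80 usd", "invoice"),
    ("delivery confirmation order 99 shipped", "delivery_note"),
]

# "Training" here = counting how often each word occurs per class.
word_counts = {}
for text, label in train_docs:
    word_counts.setdefault(label, Counter()).update(text.split())

def predict(text: str) -> str:
    """Classify by which class shares the most word occurrences."""
    return max(word_counts,
               key=lambda label: sum(word_counts[label][w] for w in text.split()))

# Evaluation on a small held-out sample.
test_docs = [("invoice 789 total 12 eur", "invoice")]
accuracy = sum(predict(t) == y for t, y in test_docs) / len(test_docs)
print(f"accuracy: {accuracy:.2f}")
```

A real pipeline would replace the word-count "model" with a trained classifier or extractor and evaluate on a proper test split, but the preprocess/train/evaluate shape stays the same.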
Model Serving
The Model Serving part of the platform is responsible for serving predictions to the calling application (yuuvis® Momentum client application, 3rd party applications, scanners, etc.).
There are two main components:
- Model Serving Server – infrastructure that runs dockerized AI models performing classification or metadata extraction on documents. Can be deployed on-premises or in the cloud.
- PREDICT-API Service – responsible for calling the appropriate machine learning models, aggregating, improving, and validating the results, and finally rendering responses to the calling application.
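The interaction of a calling application with the PREDICT-API service might look like the sketch below. The endpoint path, payload, and response shape are assumptions for illustration, not the documented API; the aggregation/validation helper mirrors the role described above.

```python
# Sketch of a client calling the PREDICT-API service. Endpoint path and
# response shape are hypothetical assumptions, not the real API.
import json
from urllib import request

def call_predict(base_url: str, document_text: str) -> dict:
    """POST a document to the (assumed) predict endpoint."""
    payload = json.dumps({"text": document_text}).encode("utf-8")
    req = request.Request(
        f"{base_url}/predict",  # hypothetical endpoint path
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

def best_prediction(candidates: list, min_confidence: float = 0.5):
    """Aggregate and validate results: keep only candidates above a
    confidence threshold, then pick the best one."""
    valid = [c for c in candidates if c.get("confidence", 0.0) >= min_confidence]
    return max(valid, key=lambda c: c["confidence"], default=None)

# Example with a mocked response (no running server required):
mocked = [
    {"field": "amount", "value": "500 EUR", "confidence": 0.91},
    {"field": "amount", "value": "50 EUR", "confidence": 0.42},
]
print(best_prediction(mocked)["value"])
```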
Platform Managing
The KAIROS platform is configured using the KAIROS-API service to manage the Inference Schema. This allows system integrators to configure the use of classifiers and metadata extractors for the whole system or for each of the tenants.
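Conceptually, an Inference Schema maps tenants and document types to the models that should handle them. The sketch below is a hypothetical illustration of that idea; the field names and fallback rule are assumptions, not the real KAIROS-API schema.

```python
# Hypothetical Inference Schema: which classifier/extractors to use per
# tenant and document type, with a system-wide default. Field names are
# illustrative assumptions.
INFERENCE_SCHEMA = {
    "tenant-a": {
        "invoice": {"classifier": "invoice-clf-v2",
                    "extractors": ["amount-extractor", "date-extractor"]},
    },
    "*": {  # system-wide default for all tenants
        "invoice": {"classifier": "invoice-clf-v1", "extractors": []},
    },
}

def resolve_models(tenant: str, object_type: str):
    """Look up the models configured for a tenant/type pair,
    falling back to the system-wide default."""
    for scope in (tenant, "*"):
        entry = INFERENCE_SCHEMA.get(scope, {}).get(object_type)
        if entry is not None:
            return entry
    return None

print(resolve_models("tenant-a", "invoice")["classifier"])  # tenant override
print(resolve_models("tenant-b", "invoice")["classifier"])  # system default
```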
Infrastructure
All Serving and platform management components are dockerized and can be deployed to any infrastructure that supports Docker/Kubernetes, on-premises or in the cloud.
The Model Training and Model Serving parts are independent of each other and can be deployed to different infrastructures. The Model Training part is currently installed directly on a host operating system. This will be changed in future versions, and training will be dockerized and fully Kubernetes-compatible as well.
For customers with sensitive data who do not want their documents to leave their premises, the model training can be executed in their own infrastructure; once the models are dockerized, they can be used on-premises or in the cloud, according to the customers' needs.
Flow Description
Since this is an end-to-end platform, the flow will be best described using the most common use case: metadata extraction from invoices.
The whole process can be done in just 11 steps.
- Export documents and their metadata in a predefined format (e.g., 5,000 invoices received from different partners/suppliers).
- Check exported documents and their metadata (remove those without metadata or with wrong metadata).
- Select a suitable algorithm for training and set hyperparameters (there are one or more different algorithms for every class of problems).
- Run the machine learning training – decide which metadata shall be extracted and train the corresponding ML Training Pipelines.
- Evaluate the model performance.
- Repeat steps 1 to 5 or 3 to 5 until you are satisfied with the results.
- Dockerize the models and deploy them to the serving infrastructure.
- Define Inference Schema (set usage of the model for the appropriate document type).
- Call the predict endpoints of the PREDICT-API service to extract the metadata from new invoices in your client application.
- Show results to the user.
- Collect and save feedback – users' choices should be saved for further improvement of the models.
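The feedback-collection step at the end of the flow can be sketched as follows. The record layout is an illustrative assumption; the point is that saved user corrections become training data for the next iteration.

```python
# Sketch of collecting user feedback so the models can be improved in a
# later training run. The record layout is a hypothetical assumption.
import json
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    document_id: str
    field: str
    predicted_value: str
    corrected_value: str  # what the user actually chose

    @property
    def was_correct(self) -> bool:
        return self.predicted_value == self.corrected_value

feedback = [
    FeedbackRecord("doc-1", "amount", "500 EUR", "500 EUR"),
    FeedbackRecord("doc-2", "amount", "80 USD", "85 USD"),
]

# Persisting records as JSON lines makes them easy to merge back into
# the exported data repository for the next training iteration.
lines = [json.dumps(asdict(r)) for r in feedback]
accuracy = sum(r.was_correct for r in feedback) / len(feedback)
print(f"prediction accuracy from feedback: {accuracy:.0%}")
```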
...