Installation of the REPOSITORYMANAGER Service
Guideline for the installation of the REPOSITORYMANAGER service.
Introduction
This article describes the necessary steps for the installation of the REPOSITORYMANAGER service. In addition to the Docker image deployment, the Kubernetes cluster and a cluster firewall need to be configured accordingly. Furthermore, you can operate multiple instances of the REPOSITORYMANAGER service.
After the successful installation, further configuration is necessary in order to connect SAP and yuuvis® Momentum.
>> Configurations for the REPOSITORYMANAGER Service
Deployment
The service is delivered as a Docker container image.
>> Version Tags Services
As of 2022 Summer, the installation via Helm chart is possible as well.
>> Installation Guide
For the deployment to the yuuvis® Momentum cluster, you need a deployment and a service script as shown in the example code blocks below. The parameters have to be adjusted according to your own cluster. However, please use the /working-dir path for the PersistentVolumeClaim.
```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: $NAMESPACE
  labels:
    app: yuuvis
    name: repositorymanager
    yuuvis: "true"
  name: repositorymanager
spec:
  ports:
    - name: "http"
      port: 80
      targetPort: 8010
  type: ClusterIP
  selector:
    name: repositorymanager
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-repositorymanager
  namespace: $NAMESPACE
spec:
  storageClassName: local-path
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: $NAMESPACE
  labels:
    app: yuuvis
    name: repositorymanager
  name: repositorymanager
spec:
  replicas: 1
  selector:
    matchLabels:
      name: repositorymanager
  template:
    metadata:
      labels:
        name: repositorymanager
    spec:
      containers:
        - name: repositorymanager
          image: docker.optimal-systems.org/team-kookaburra/$CI_PROJECT_NAME:commit-$CI_COMMIT_SHORT_SHA
          imagePullPolicy: Always
          env:
            - name: JAVA_OPTS
              value: -Xmx128m
            - name: SPRING_CLOUD_CONFIG_URI
              value: "http://configservice/config"
            - name: SPRING_PROFILES_ACTIVE
              value: prod,docker,kubernetes
          ports:
            - containerPort: 8010
          volumeMounts:
            - name: storage
              mountPath: /working-dir
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: data-repositorymanager
      restartPolicy: Always
      imagePullSecrets:
        - name: osgitlab
```
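Both manifests can then be applied with kubectl. A short sketch, assuming the hypothetical file names repositorymanager-service.yaml and repositorymanager-deployment.yaml:

```bash
# File names are examples; adjust them to your setup.
kubectl apply -f repositorymanager-service.yaml
kubectl apply -f repositorymanager-deployment.yaml

# Verify that the pod and the service are up.
kubectl -n $NAMESPACE get pods,svc -l name=repositorymanager
```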
Configuring the Kubernetes Cluster
The REPOSITORYMANAGER service has to be accessible for the SAP system, which runs outside the yuuvis® Momentum Kubernetes cluster. We recommend using a load balancer of your cloud provider or implementing an Ingress controller. Alternatively, you could open the corresponding node port for the connection to the SAP system.
The following two sections provide example configurations for both options: access via Ingress and access via Node Port.
Note: If your tests fail due to problems with the ILM protocol, please disable CORS
- by configuring nginx.ingress.kubernetes.io/enable-cors: "false" if you use an Ingress controller (see the command sketch below), or
- in the load balancer of your cloud provider.
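If the Ingress already exists in the cluster, the annotation can also be set on the live object. A minimal sketch using kubectl, assuming the Ingress name repositorymanager-ingress from the example further below:

```bash
# Disable CORS handling on an existing Ingress (the Ingress name is an assumption).
kubectl annotate ingress repositorymanager-ingress \
  nginx.ingress.kubernetes.io/enable-cors="false" --overwrite
```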
Access via NGINX Ingress Controller
The following steps result in a configuration where the REPOSITORYMANAGER service is accessible via NGINX Ingress controller from outside the Kubernetes cluster.
Add the Helm repositories for the NGINX Ingress controller and for a certificate manager for automated TLS certificate management:
```bash
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo add jetstack https://charts.jetstack.io
helm repo update
```
Install the certificate manager and the Ingress:
```bash
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.8.0 \
  --set installCRDs=true

helm install nginx-ingress nginx-stable/nginx-ingress --set rbac.create=true

# Validate that nginx is running
kubectl get pods --all-namespaces -l app=nginx-ingress-nginx-ingress
```
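The Certificate and Ingress in the next step reference a ClusterIssuer named letsencrypt-prod, which is not created by the Helm charts above. A minimal sketch of such an issuer, assuming Let's Encrypt with HTTP-01 validation through the NGINX Ingress class (the e-mail address is a placeholder):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@yourdns.net # Change value
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```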
Add the following ingress-repositorymanager.yaml file to the templates folder of the repositorymanager Helm chart. Adjust the values according to your installation.

Example 'ingress-repositorymanager.yaml' configuration:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: repositorymanager.yourdns.net # Change value
spec:
  secretName: repositorymanager-yourdns-net-cert # Change value
  dnsNames:
    - repositorymanager.yourdns.net # Change value
  issuerRef:
    group: cert-manager.io
    name: letsencrypt-prod
    kind: ClusterIssuer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    ingress.kubernetes.io/ssl-redirect: "true"
    meta.helm.sh/release-name: repositorymanager
    meta.helm.sh/release-namespace: repositorymanagerwinter # Change value
    nginx.ingress.kubernetes.io/enable-cors: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: repositorymanager-ingress
spec:
  rules:
    - host: repositorymanager.yourdns.net # Change value
      http:
        paths:
          - backend:
              service:
                name: repositorymanager
                port:
                  number: 80
            path: /
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - repositorymanager.yourdns.net # Change value
      secretName: repositorymanager-yourdns-net-cert # Change value
```
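To check that the certificate has been issued and the Ingress is active, the standard kubectl status commands can be used; a brief sketch with the names from the example above:

```bash
# The Certificate should eventually report READY=True.
kubectl get certificate repositorymanager.yourdns.net
# The Ingress should list the configured host and an address.
kubectl get ingress repositorymanager-ingress
```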
If you want to operate multiple instances of the REPOSITORYMANAGER service, they have to run in separate namespaces, each with its own Ingress controller.
Access via Node Port
The following steps result in a configuration where the REPOSITORYMANAGER service is exposed via Node Port to be accessible from outside the Kubernetes cluster.
Expose the REPOSITORYMANAGER service via a Kubernetes node port to the local network. In the example configuration shown in the code block below, the REPOSITORYMANAGER service will be accessible in the local network at the address CLUSTER_IP:30036.
```yaml
kind: Service
apiVersion: v1
metadata:
  name: repositorymanager
  namespace: yuuvis
  labels:
    app: yuuvis
    name: repositorymanager
    yuuvis: 'true'
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8010
      nodePort: 30036 # This should be a unique value in the range 30000-32767
  selector:
    name: repositorymanager
  type: NodePort
```
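CLUSTER_IP stands for the IP address of one of your cluster nodes. A brief sketch for verifying the exposed node port and looking up a node address:

```bash
# Confirm that the service maps port 80 to node port 30036.
kubectl -n yuuvis get svc repositorymanager
# List the node IP addresses (INTERNAL-IP/EXTERNAL-IP columns).
kubectl get nodes -o wide
```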
Configuring the Cluster Firewall
The REPOSITORYMANAGER service, more precisely its barcode functionality, needs direct access from its pod to the SAP system. For this reason, a firewall entry needs to be added on cluster level to allow communication with the SAP system.
Provide the cluster administrator with the IP:port information of the SAP system that will hold the RFC connection, so that the barcode functionality can work properly. The configurations have to be carried out on the KGS Administration Page, which can be accessed via the following address: http://<host>/repositorymanager/cs/. The default port is 8010, the user name is admin, and the password is admin.
After logging in, you can change the login credentials via OSGi > Configuration > Application Framework Management Console.
Adjusting the Service Configuration
In case of a standard installation, the service configuration parameters are specified in the corresponding application.yml file and are set to reasonable defaults. Those default values can be overwritten by specifying different values in the repositorymanager-prod.yml configuration file. In particular, the parameters configuring the ActiveMQ connection have to be adjusted:
- spring.activemq.broker-url
- spring.activemq.user
- spring.activemq.password
Note: KGS/CS-Admins should limit the free access to SAP systems to ensure that only relevant SAP systems can store data for a certain tenant.
Parameters of the repositorymanager-prod.yml configuration file:
Property | Type | Description | Example value | Default value |
---|---|---|---|---|
repository-manager.barcode.default-docType | String | Default SAP document type that is used if there is no barcode mapping for the yuuvis® Momentum content type (see repository-manager.barcode.cntType2docType). | TIF | TIF |
repository-manager.barcode.cntType2docType | String | List of entries for mapping the barcode document type, separated by pipe characters. Entries consist of: yuuvis® Momentum content type, equals sign, SAP document type. For each yuuvis® Momentum content type missing in the mapping list, the SAP document type configured in repository-manager.barcode.default-docType is used. | Image/TIFF=FAX\|application/pdf=PDF | Image/TIFF=FAX\|application/pdf=PDF |
core.api.url | String | Public URL of the yuuvis® Momentum authentication service. | http://<yourserver>:<port> | http://127.0.0.1:7301/ |
core.api.username | String | Username of the technical user for the REPOSITORYMANAGER service's access to yuuvis® Momentum. Note: The technical user requires full access rights to the objects. | sap | root |
core.api.password | String | Password of the technical user for the REPOSITORYMANAGER service's access to yuuvis® Momentum. | optimal1 | optimal |
core.api.tenant | String | Tenant of the technical user for the REPOSITORYMANAGER service's access to yuuvis® Momentum. | default | default |
spring.activemq.broker-url | String | IP address and port used by ActiveMQ. | | tcp://127.0.0.1:61616 |
spring.activemq.user | String | User name for ActiveMQ access. | admin | admin |
spring.activemq.password | String | Password for ActiveMQ access. | admin | admin |
The following code block shows an example configuration.
```yaml
repository-manager:
  barcode:
    cntType2docType: Image/TIFF=FAX|application/pdf=PDF
    default-docType: TIF
core:
  api:
    url: https://client.con.yuuvis.org
    username: root
    password: optimal
    tenant: default
spring:
  activemq:
    broker-url: tcp://repositorymanager-mq:61616
    user: admin
    password: admin
```
Multiple Instances of the REPOSITORYMANAGER Service
If you use an Ingress controller, just create additional instances in separate namespaces, each with its own Ingress controller.
The following example deployment process is intended to explain the usage of multiple REPOSITORYMANAGER service instances exposed via Node port.
To achieve multi-tenancy, an independent instance of the REPOSITORYMANAGER service needs to be deployed for each individual tenant. The same service artifact can be used. In general, the following principles apply:
- Each instance should have its own ActiveMQ pod, distributed as a separate image.
- Each pair of REPOSITORYMANAGER service and ActiveMQ instances should be deployed into its own namespace and have its own ports and profile configuration (the default profile is prod).
- Each pair of REPOSITORYMANAGER service and ActiveMQ must have its own tenant.
Namespaces, service ports and profiles should be specified in the deployment scripts. The following sections describe the required configuration steps. All scripts are applied via the command:

```bash
kubectl apply -f <filename>
```
Preparation
Decide on the namespace and the node ports to be used (one port for the REPOSITORYMANAGER service and two ports for the repositorymanager-mq service) as well as the profile in which the application will run (this determines the naming of the configuration file). In this example, the namespace is repositorymanager-1, the ports are 30000 for the REPOSITORYMANAGER service and 30001 and 30002 for ActiveMQ, and the profile is instance1. The cluster should use the repositorymanager app schema. The tenant to be used by the REPOSITORYMANAGER service should be created and configured as described above for the configuration of a single instance.
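For easier reuse in the following scripts, the chosen values can be kept in shell variables; a small sketch with the example values of this walkthrough:

```bash
# Example values for this walkthrough; adjust them to your environment.
NAMESPACE=repositorymanager-1
PROFILE=instance1
RM_NODE_PORT=30000        # REPOSITORYMANAGER service
MQ_DASHBOARD_PORT=30001   # ActiveMQ web admin page
MQ_OPENWIRE_PORT=30002    # ActiveMQ broker (openwire)
```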
Namespace
Create the namespace using the following YML script:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: repositorymanager-1 # This is an example value that has to be replaced by the name of the namespace you want to use for the additional REPOSITORYMANAGER service instance.
```
ActiveMQ service
Deploy the repositorymanager-mq pod for ActiveMQ using the following two YML scripts:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: repositorymanager-1 # Change value to the namespace specified in the namespace script above.
  name: repositorymanager-mq
  labels:
    app: yuuvis
    name: repositorymanager-mq
spec:
  replicas: 1
  selector:
    matchLabels:
      name: repositorymanager-mq
  template:
    metadata:
      labels:
        name: repositorymanager-mq
    spec:
      containers:
        - image: docker.yuuvis.org/<image> # Change value
          name: repositorymanager-mq
          imagePullPolicy: Always
      restartPolicy: Always
      imagePullSecrets:
        - name: changeme # Change value
```
Note: This is an example script which requires a specific secret to be present in the same namespace. Different clusters might require some changes.
```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: repositorymanager-1
  name: repositorymanager-mq
  labels:
    app: yuuvis
    name: repositorymanager-mq
spec:
  selector:
    name: repositorymanager-mq
  ports:
    - name: dashboard
      port: 8161
      nodePort: 30001
    - name: openwire
      port: 61616
      nodePort: 30002
  type: NodePort
```
Note: The ActiveMQ service exposes two ports: one to access the web admin page, internally on port 8161 and externally on 30001, the other to access ActiveMQ itself, internally on port 61616 and externally on 30002.
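After applying both scripts, the port mapping can be verified with kubectl; a brief check:

```bash
# Both node ports should appear in the PORT(S) column.
kubectl -n repositorymanager-1 get svc repositorymanager-mq
```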
REPOSITORYMANAGER service
Deploy the repositorymanager service using the following two YML scripts:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-repositorymanager
  namespace: repositorymanager-1
spec:
  storageClassName: local-path
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: repositorymanager-1
  labels:
    app: yuuvis
    name: repositorymanager
  name: repositorymanager
spec:
  replicas: 1
  selector:
    matchLabels:
      name: repositorymanager
  template:
    metadata:
      labels:
        name: repositorymanager
    spec:
      containers:
        - name: repositorymanager
          image: docker.optimal-systems.org/team-kookaburra/repositorymanager-momentum:commit-2d346b0e
          imagePullPolicy: Always
          env:
            - name: JAVA_OPTS
              value: -Xmx128m
            - name: SPRING_CLOUD_CONFIG_URI
              value: "http://configservice.yuuvis/config"
            - name: SPRING_PROFILES_ACTIVE
              value: instance1,docker,kubernetes
          ports:
            - containerPort: 8010
          volumeMounts:
            - name: storage
              mountPath: /working-dir
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: data-repositorymanager
      restartPolicy: Always
      imagePullSecrets:
        - name: osgitlab
```
Note: This script uses the image from OS GitLab, which requires the osgitlab secret to be present in the same namespace. Different clusters might require some changes. Additionally, the environment parameter SPRING_CLOUD_CONFIG_URI should point to the CONFIGSERVICE of the specific cluster. The SPRING_PROFILES_ACTIVE environment variable should contain the docker and kubernetes profiles as well as the service profile dedicated to that instance, in this case instance1.
```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: repositorymanager-1
  labels:
    app: yuuvis
    name: repositorymanager
    yuuvis: "true"
  name: repositorymanager
spec:
  ports:
    - name: "http"
      port: 80
      targetPort: 8010
      nodePort: 30000
  type: NodePort
  selector:
    name: repositorymanager
```
Note: The REPOSITORYMANAGER service exposes one port to allow internal access via port 8010 and external access via port 30000.
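To confirm that the instance is running and reachable on the chosen node port, a quick check can look like the following sketch; <node-ip> is a placeholder for a cluster node address:

```bash
# The pod should be Running and the service should map 80:30000.
kubectl -n repositorymanager-1 get pods,svc -l name=repositorymanager
# Optional connectivity check from outside the cluster.
curl -i http://<node-ip>:30000/
```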
Configuration
Use the CONFIGSERVICE of the cluster to create the repositorymanager-instance1.yml file. This file contains the configuration for the service instance running with the profile instance1.
The following code block shows an example for the configuration:
```yaml
core:
  api:
    url: http://client.yuuvis
    username: root
    password: optimal
    tenant: instance1tenant
spring:
  activemq:
    broker-url: tcp://repositorymanager-mq:61616
```
This file provides the tenant name and the credentials for accessing the yuuvis® Momentum system, as well as the URLs for reaching the system and ActiveMQ.
Once the configuration is created, the REPOSITORYMANAGER service should be restarted to apply the changes.
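The restart can be triggered via a rollout restart of the deployment; a sketch assuming the namespace and deployment name used in this example:

```bash
# Recreate the pod so that the new configuration is fetched from the CONFIGSERVICE.
kubectl -n repositorymanager-1 rollout restart deployment repositorymanager
kubectl -n repositorymanager-1 rollout status deployment repositorymanager
```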
Access
Once the service is deployed and configured, a reverse proxy should be created to allow two-way communication between the REPOSITORYMANAGER service and SAP. This will also allow access to the KGS admin panel for service configuration.
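A minimal sketch of such a reverse proxy as an NGINX server block, assuming the node port 30000 from this example and a hypothetical host name repositorymanager-1.yourdns.net; any other reverse proxy can be used as well:

```nginx
# Hypothetical NGINX reverse proxy in front of the node port.
server {
    listen 80;
    server_name repositorymanager-1.yourdns.net; # Change value

    location / {
        # <node-ip> is a placeholder for a cluster node address.
        proxy_pass http://<node-ip>:30000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```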
Summary
Once you have completed the installation process described in this guideline, the next step is the proper configuration of yuuvis® Momentum and the SAP system.