This is a brief overview of folder structures in enaio redline 4.0. We also describe how to migrate existing 3.3 structures to 4.0 structures.

Introduction

Motivation

The way objects are structured was changed to improve performance and flexibility. In the previous version, saving objects to multiple locations with different access rights could slow down performance significantly. In addition, new objects had to be created whenever the standard structure had to be represented differently for different subject areas. The new folder structures in version 4.0, based on structure elements that represent search conditions, offer many more advantages.

The following presentation explains how folder structures are set up and used (German only):

Using Subfolders in Version 3.3 vs. Configuring Folder Structures in 4.0

In version 3.3, static structures were created by adding (sub)folder object types to a folder or context folder. In version 4.0, you configure structure elements to create new folder structures. Structure elements represent a search for a document type in the object index data. Subfolder object types are no longer necessary.

The old context folders are kept, and are now simply called folders. Folders regulate such things as access rights and storage rules for objects inside the folder. The index data of each document object defines the folder structure. The data you enter determines where in the folder structure the object will be located.

How Folders are Displayed in the Client

The new folder structures look very similar to what you saw in previous versions of enaio redline. However, you can now group objects in many more ways. You can also create sums, such as the value of invoices by month, or vacation leave taken by year. For more information, refer to the Structure Service API (Folder Structure).

Changes to the Schema

In version 4.0, there is still only one context folder per object type group, and there is no need to create additional folder object types.

New folder structures are configured in the context folder, in the 'Folder structure' view area. For more information about configuration possibilities, refer to the Structure Service API (Folder Structure).

The new enaio structure-service provides the client with a folder structure. For more information, refer to the online Help or the online developer documentation.

Changes to the Filing Locations

An object can now only be filed to a single location. The enaio redline client 3.3 already took this into consideration. With version 4.0, it is no longer even possible to use the API to file an object to another location.

To make objects visible in another location, you can use multi-value fields that contribute to the creation of a structure, reference fields, or link documents.

File Components

With the static structures in version 3.3, it was possible to nest file components. In version 4.0, reference fields are used instead. References can be seen and accessed under the "References" tab in a context folder aspect. In future versions of redline 4.x, it will be possible to see the referenced objects in the tree structure itself.

Migration

The following sections describe how to convert structures.

There are template scripts available that support the migration. You must modify them based on your analysis of the existing source structures and the desired target structures. The tools use new REST endpoints for batch operations, allowing high-performance processing of the migration.

Migration Steps

To execute the migration, complete the following steps:

  • Step 0: Update to the latest version 3.3 Release Candidate

If the last release of enaio version 3.3 (2018-04-25) has not been installed, you must do this first. As always, check all release digests to make sure you don't skip any necessary steps.

  • Step 1. Analyze the scenarios and create a migration plan

Since subfolders and filing locations will be replaced with index data fields, you must plan which index data fields must be modified for existing objects. You must also decide what information they should contain, to replace the information in the current filing location and parent object.

For example, if previously you had a set number of subfolders that were always available, such as documents, photos, and emails, you could replace these with a new catalog field "Document Type". The catalog field could contain values such as "Document", "Photo", and "Email". You then configure this field as a structure element in the structure service. The three catalog values are then available in the client as "virtual" folders.


  • Step 2: Modify schema

Based on the plan you created in the previous step, you must modify the data structures. Note the following:

    • You must extend the document types with the relevant index data fields.
    • You may need to create corresponding catalogs.
    • The filing location relations for all document types must allow the creation of instances of the document type in the context folder.
    • You may need to extend the document type forms with the new fields.
    • You may need to modify scripts accordingly.
    • You must create an appropriate structure element definition for each context folder. For more information, refer to Folder Structure API.

      Important Note

      If there are regular folder types in the schema that you would like to be context folders in version 4.0, and instances of them already exist, then the information stored in the elasticsearch database about all instances that are child objects of these folder instances has to be extended with the context folder information. This is done by reindexing the corresponding objects. If new fields are added to the objects and they are enriched during the migration process using the Transformation API, this reindexing happens automatically. If not, you must start the reindexing manually, using the "Subsequent full text indexing" operation in enterprise-manager.
  • Step 3: Update to 4.0

After all the object types have been modified and the schema activated on the server, you can execute the update to version 4.0.
After completing the update, also follow the steps in Additional Manual Steps for Updating to enaio redline 4.0.

Note

  • After the update to version 4.0, the previous 3.x client is no longer available. The new 4.0 client cannot display any parent-child relationships except for the new virtual ones. These are only visible after executing migration Step 4, below.
    If the system you are updating has more than a million DMS objects and you cannot afford a longer system downtime, you should execute the first transformation step from Step 4 before updating to 4.0, while the system is still running. As an alternative, but only after confirming with Optimal Systems, the 3.x client can be run with version 4.0 during the transition period.
  • You can still configure existing subfolders in enaio redline designer 4.0, but it will not be possible to create new subfolders.


  • Step 4: Executing the Migration

Note

We recommend using a test system to validate the migration.

Using the Transformation API endpoints described below, you can add the values of the parent object or fixed values to the new fields you created in step 2. Do this separately for all new fields for each document type.
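
For example, a single call to the batch update endpoint described below could fill the new "Document Type" catalog field from the Step 1 example with a fixed value for a set of objects. This is only a sketch; the object type 'mydocument', the field name 'documenttype', and the IDs are placeholders:

{ "query": { "expression" : ["ID1","ID2","ID3"], "type" : "mydocument" }, "data": { "documenttype": { "value" : "Email" } } }

To copy values from the parent object instead of setting fixed values, use the parentref property described below.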

You can also use the Transformation API to move and delete objects in bulk. You can move objects from their current filing locations directly to a context folder, or delete empty subfolders.
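
As a sketch, the move and delete operations could look like the following two calls. The object types and IDs are placeholders; the endpoints, the 'context' move target, and the deleteonlyemptyfolders option are documented below:

PUT /rest-ws/service/dms/batch
{ "query": { "expression" : ["ID1","ID2","ID3"], "type" : "mydocument" }, "moveto" : "context" }

POST /rest-ws/service/dms/batch/delete
{ "query": { "expression" : ["ID4","ID5"], "type" : "mysubfolder" }, "options": { "deleteonlyemptyfolders" : true } }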

Since you may need to call the Transformation API frequently, we recommend creating scripts for executing the migration. Templates are available. For more information, refer to the section "Using the Transformation API and Creating Scripts", below.

Important Note

There is currently a session timeout of 30 minutes. Any (Transformation) API call running longer than that is aborted. To make Transformation API calls that apply to many objects, with a runtime of more than 30 minutes, raise this timeout to a suitable value before making these calls. To do so, proceed as follows:

  1. Stop the dms-service.
  2. Open the folder <dms-service>\standalone\deployments.
  3. Open the file rest-ws.war with a zip-program like 7-zip and open the "WEB-INF" folder inside the zip-file.
  4. Extract the web.xml file to a temporary location and edit it with a text-/xml-editor.
  5. At the very bottom of the file you will find the following lines:
    <session-config>
    <!-- Note: This setting defines the JWT Token expire time -->
    <session-timeout>30</session-timeout>
    </session-config>

  6. The value "30" is the current session-timeout in minutes. Raise it to a value fitting your needs.
  7. Save the file and place it back in the rest-ws.war archive, overwriting the existing file.
  8. Start the dms-service again. The session timeout is now raised to your chosen value.

(After the migration is finished, don't forget to lower this value again.)


Individual Transformation Steps Using the API

  1. Using the Transformation API and scripts, add the data from the subfolders and filing locations to the document objects.
  2. Move the document objects to their appropriate context folders.
  3. Delete the now-empty subfolders.

Converting the Business Logic

After migrating the object structures, you may need to update the business logic in your scripts. Any areas that deal with deleted subfolders are affected. For example, you might remove the automatic creation of subfolders after context folders have been created.

Artifacts that may need to be cleaned up by the user

After the subfolders have been deleted, favorites may still be available. The user can remove these manually.

  • Step 5: Modifying the schema

Remove all folder types in the schema that are not being used as context folders. You may also need to modify the rights system regarding roles and clauses.

Using the Transformation API and Creating Scripts

In this section, we describe the Transformation API as well as how to create transformation scripts.

      • Transformation API

DmsBatchService.updateObjects

Provides batch updates of objects by query. You can modify a list of objects by providing an ID list or by defining a search query. Warning: This operation may modify a large list of objects.

The following example updates all objects with the given IDs of the type 'mydocument' and sets the index data as given by the data object in JSON.

{ "query": { "expression" : ["ID1","ID2","ID3"], "type" : "mydocument" }, "data": { "stringstatefield": { "value" : "newvalue" }, "numbervaluefield": { "value" : 42 } } }

Query object

The query expression supports a list of object IDs or a simple query using the same syntax as ResultService.getQueryResult. The type property must be set for the simple query. For an ID list, it is used as the type of the listed objects; here you can also set the type to sysobject.
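
Both query forms are sketched below. The type and field names are placeholders; the simple query condition follows the same syntax as the comprehensive example further down:

{ "query": { "expression" : ["ID1","ID2","ID3"], "type" : "sysobject" } }

{ "query": { "expression" : "statefield=active", "type" : "mydocument" } }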

Data object

Using the data object, you can describe one or more changes to the custom index data fields. The fields are identified by the technical name of the field. For example, you would use the following data object to change the value of the field 'name' to the fixed value 'Max'.

{ "data": { "name": { "value" : "Max" } } }

The data object supports the following properties.

Property     Description
value        Fixed new value for the field. The value is expected in the same syntax as for any DMS object. See DmsService.
parentref    Qualified technical name of a parent object field. The value of the parent object is copied to the index data field of the object. If the parent of this object cannot be found, or if the object is not a child of this parent type, the value property is used as a fallback. If value is not set either, an error is reported.
onlyifempty  If set to true, the index data field is only updated to a new value if the value is currently not set. A value is considered not set if the current field value is null or an empty string. The default is false.
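
For example, the following data object (a sketch with placeholder field names) copies the value from the parent folder, falls back to a fixed value if the parent field cannot be resolved, and only writes the field if it is currently empty:

{ "data": { "departmentfield": { "parentref" : "personalfile.departmentfield", "value" : "Unknown", "onlyifempty" : true } } }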

Move operation

You can also move the object, using moveto to provide a move target.

The following JSON code example moves any object with a given ID to the location given by 'PARENTID'.

{ "query": { "expression" : ["ID1","ID2","ID3"], "type" : "mydocument" }, "moveto" : "PARENTID" }

Other possible move targets are the fixed root target and the context target. The context move target moves the object as a direct child of the context folder. 

Options

The following properties can be used to configure the behavior of the update operation.

Property       Default value  Description
ignoremissing  false          If an ID list expression is used, and this list contains IDs for objects that cannot be found, this is handled as a failure. If the option is set to true, only a warning is reported.
breakonerror   false          If set to true, the operation is canceled at the first failure that occurs.
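
A sketch combining both options with an ID list (IDs and type are placeholders): missing IDs are then only reported as warnings, while the first real failure cancels the operation.

{ "query": { "expression" : ["ID1","ID2","ID3"], "type" : "sysobject" }, "data": { "stringstatefield": { "value" : "newvalue" } }, "options": { "ignoremissing" : true, "breakonerror" : true } }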

Comprehensive example

The following example shows the use of a parent reference and a query that uses a parent property.

{ "query": { "expression" : "personalfile.typefield=employee;statefield=active"], "type" : "document" }, "data": { "statefield": { "value" : "inactive" }, 
"stringfield": { "parentref" : "personalfile.anotherstringfield" } }, "moveto":"context", "options": { "breakonerror" : true } }

Using this update JSON leads to the following result:

Every document with the current statefield value 'active' inside a context folder with the typefield value 'employee' gets the state 'inactive', and the value of 'stringfield' is set to the value of 'anotherstringfield' on the context folder. If an object is located inside a subfolder, it is also moved to the context folder. If the operation runs into an error, it is canceled immediately.

Transaction handling and return code

Each object update is run in one transaction. If an object fails to update, this is reported as a failure in the result. Any other objects marked as successful are not rolled back. If the operation is completely successful, 200 (OK) is returned. If the operation fails completely, for example if the input JSON cannot be parsed, an error code such as bad request (400) is returned. If a part of the operation fails, a conflict code (409) is returned with a list of the objects that failed.


Returns
Returns an update summary as a report. Set details to true to get a report of the updated objects.

Endpoint method overview

Property            Value
Qualified name      DmsBatchService.updateObjects
Full path           /rest-ws/service/dms/batch
Success code        200: OK
Failure codes       400: The input json can not be parsed. See cause message for more details. [DMS_METADATA_JSON_INPUT_PARSING_ERROR]
HTTP method         PUT
Types consumed      application/json; charset=utf-8
Types produced      application/xml; charset=utf-8, application/json; charset=utf-8
Required privilege  No special privileges are required to invoke this endpoint.

Endpoint request parameter

Name     Comment                                                                         Type     Input
details  If set to true, a detailed success report of the updated objects is returned.  QUERY    true/false
in       The update input (JSON) as body content.                                       CONTEXT  json-body

DmsBatchService.deleteObjects 

URI: http://localhost:8080/rest-ws/service/dms/batch/delete

Batch deletion of objects by query.

Example for delete by id:

{ "query": { "expression" : ["ID1","ID2","ID3"] } }

The accepted JSON is the same as for the update endpoint, but the moveto and data properties are ignored. In particular, the query object is described in the update operation; see DmsBatchService.updateObjects.

Options

The following properties can be used to configure the behavior of the delete operation.

Property                Default value  Comment
harddelete              false          If this property is set to true, the objects are deleted permanently.
deleteonlyemptyfolders  true           If set to false, even folders that still contain child objects are deleted or recycled. The deletion is recursive (child objects are also deleted).
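
For example, the following sketch (with placeholder IDs and type) permanently deletes the listed subfolders, but only if they are already empty:

{ "query": { "expression" : ["ID1","ID2","ID3"], "type" : "mysubfolder" }, "options": { "deleteonlyemptyfolders" : true, "harddelete" : true } }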

For other possible options, see DmsBatchService.updateObjects.

Note: This endpoint uses POST because some HTTP frameworks forbid a DELETE request with body content.

Transaction handling and return code

Each object deletion is run in one transaction. If an object fails to be deleted, this is reported as a failure in the result. Any other objects marked as successful are not rolled back. If the operation is completely successful, 200 (OK) is returned. If the operation fails completely, for example if the input JSON cannot be parsed, an error code such as bad request (400) is returned. If a part of the operation fails, a conflict code (409) is returned with a list of the objects that failed.


Returns
Returns a deletion summary as a report. Set details to true to get a detailed report of the deleted objects.

Endpoint method overview

Property        Value
Qualified name  DmsBatchService.deleteObjects
Full path       /rest-ws/service/dms/batch/delete
Success code    200: OK
Failure codes   400: The input json can not be parsed. See cause message for more details. [DMS_METADATA_JSON_INPUT_PARSING_ERROR]
HTTP method     POST
Types consumed  application/json; charset=utf-8
Types produced  application/xml; charset=utf-8, application/json; charset=utf-8
Required right  No special privileges are required to invoke this endpoint.

Endpoint request parameter

Name     Comment                                                                         Type     Input
details  If set to true, a detailed success report of the deleted objects is returned.  QUERY    true/false
in       The delete input (JSON) as body content.                                       CONTEXT  json-body
      • Creating Scripts

Here we show two node.js scripts, "batchrunner.js" and "index.js". 
The batchrunner.js script is used to build the connection and run the API calls. You only need to adapt the connection parameters (host, auth) in this file.

batchrunner.js
const request = require('request');
const util = require('util');

// Adapt the connection parameters (host, auth) to your environment.
var host = 'http://localhost:8080';
var auth = {
  'user': 'root',
  'pass': 'password',
  'sendImmediately': true
}

module.exports.executeUpdate = (query) => {

  var options = {
    url : host + '/rest-ws/service/dms/batch',
    method: 'PUT',
    headers : {
      'accept' : 'application/json; charset=UTF-8'
    },
    auth : auth,
    json : query
  }
  console.log('Batch update service url  : '+options.url);
  console.log('Batch update service query:\n'+util.inspect(options.json, false, null));

  // Wrap the request callback in a promise so the calls can be awaited in sequence.
  return new Promise((resolve, reject) => {
    request(options, (err, resp, body) => {
      if (err) {
        reject(err);
        return;
      }
      console.log(util.inspect(body, false, null));
      resolve(body);
    });
  });
};

module.exports.executeDelete = (query) => {

  var options = {
    url : host + '/rest-ws/service/dms/batch/delete',
    method: 'POST',
    headers : {
      'accept' : 'application/json; charset=UTF-8'
    },
    auth : auth,
    json : query
  }
  console.log('Batch delete service url  : '+options.url);
  console.log('Batch delete service query:\n'+util.inspect(options.json, false, null));

  // Wrap the request callback in a promise so the calls can be awaited in sequence.
  return new Promise((resolve, reject) => {
    request(options, (err, resp, body) => {
      if (err) {
        reject(err);
        return;
      }
      console.log(util.inspect(body, false, null));
      resolve(body);
    });
  });
};


The index.js script contains two sample calls of the endpoints described above. Adapt them and add new calls as needed for your migration.

index.js
const batch = require("./batchrunner.js");

async function runTransformation()
{
  // Enrich the document index data from the parent folder and move the documents to the context folder (migration steps 1 and 2).
  await batch.executeUpdate(
    {
      query: {
        expression: '',
        type: 'lsydokument'
      },
      usercomment: 'Move',
      data: {
        "albumname": {
          "parentref": "album.name",
          "value": "Horst"
        }
      },
      moveto: 'context',
      options: {
        breakonerror: true
      }
    }
  );

  // Delete the now-empty subfolders (migration step 3).
  await batch.executeDelete(
    {
      query: {
        expression: '',
        type: 'album'
      },
      usercomment: 'Delete',
      options: {
        breakonerror: true,
        deleteonlyemptyfolders: true,
        harddelete: true
      }
    }
  )
}

runTransformation();

      • Running the Scripts

The above scripts can be run from the command line by entering "node index.js", provided that node.js is installed and the request module has been installed locally with 'npm install request'.

Alternatively you can use any other REST-WS-capable client, such as Postman, cURL or a Java program using Apache HttpComponents. An example of the latter can be seen with a dms-service installation (3.x or 4.0) at http://localhost:8080/rest-ws/#PAGE:examples/java.


