Connector

The Connector is the component that indexes data into Onify. It indexes objects, users, events, options, and more. The connector supports multiple data sources, such as SQL, LDAP, and REST. Pipelines streamline the indexing process using a chain of tasks that can be reused.

For some example connector pipelines, check out our example GitHub repo.

Install and setup Connector

To set up the connector, follow these steps:

  1. Download the latest Onify Agent (via https://support.onify.co/changelog)
  2. Install and register Onify Agent (see https://support.onify.co/docs/install#onify-agent)
  3. Make sure the connector scripts are in the ./scripts/connector folder of the Onify Agent installation
  4. Add the tag connector to the Onify Agent that will be running the connector scripts (via <onify-app-url>/admin/configuration/agents)

Configure Connector

Data sources

The connector supports the following data sources.

REST API

  • name - Name of the data source
  • description - Short description that describes the data source
  • host - Specify hostname, e.g. api.service.com
  • scheme - Use http or https for host
  • timeout - Timeout for web request
  • method - HTTP method for data source (get, post or put)
  • pagination.paginationtype - Set pagination type for response, if any (nexturl or offset)
  • pagination.nexturlfield - The next page URL field (if pagination is set to nexturl)
  • pagination.offset - The offset field (if pagination is set to offset)
  • pagination.totalfield - The offset total results field (if pagination is set to offset)
  • headers - Add HTTP headers (key and value)
  • body - JSON request body
  • recordspath - Set if records are located in a specific variable/branch
  • tag - Tag(s) for the data source (can be used for exporting configuration)
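
As an illustration, a REST API data source using the fields above could be defined along these lines. All values are placeholders, and this is only a sketch: the exact storage format (for instance, whether the pagination and header fields are nested or flat) may differ in your Onify version.

{
	"name": "crm-api",
	"description": "Customer REST API",
	"host": "api.service.com",
	"scheme": "https",
	"timeout": 30,
	"method": "get",
	"pagination": {
		"paginationtype": "offset",
		"offset": "offset",
		"totalfield": "total"
	},
	"headers": [
		{ "key": "Authorization", "value": "Bearer <token>" }
	],
	"recordspath": "data.records",
	"tag": ["crm"]
}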

SQL

Supports Microsoft SQL Server and MySQL.

  • name - Name of the data source
  • description - Short description that describes the data source
  • host - Specify hostname, e.g. dbserver1
  • sqltype - Select type of SQL server to connect to (mysql or mssql)
  • database - Name of the database
  • port - Custom port (if any)
  • timeout - Timeout for SQL query
  • encrypt - Encrypt communications (required for Azure database)
  • username - SQL username (required for mssql, optional for mysql)
  • password - SQL user password (required for mssql, optional for mysql)
  • tag - Tag(s) for the data source (can be used for exporting configuration)
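
For example, a data source for a Microsoft SQL Server database could look like this. The values are placeholders and the storage format is a sketch based on the field list above:

{
	"name": "crm-db",
	"description": "CRM database on dbserver1",
	"host": "dbserver1",
	"sqltype": "mssql",
	"database": "crm",
	"port": 1433,
	"timeout": 60,
	"encrypt": true,
	"username": "onify_reader",
	"password": "<password>",
	"tag": ["crm"]
}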

LDAP

  • name - Name of the data source
  • description - Short description that describes the data source
  • server - Specify hostname, e.g. ldap.domain.local
  • port - LDAP port
  • timeout - Timeout for LDAP request
  • searchbase - The location in the directory where the search for a particular directory object begins, e.g. dc=example,dc=com
  • searchscope - The set of entries at or below the search base that may be considered potential matches (base limits the search to just the base object, onelevel to just its immediate children, and subtree searches the entire subtree from the base object down)
  • username - LDAP user username
  • password - LDAP user password
  • tag - Tag(s) for the data source (can be used for exporting configuration)
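
For example, an LDAP data source for a company directory could look like this. The values are placeholders and the storage format is a sketch based on the field list above:

{
	"name": "company-ad",
	"description": "Company Active Directory",
	"server": "ldap.domain.local",
	"port": 389,
	"timeout": 30,
	"searchbase": "dc=example,dc=com",
	"searchscope": "subtree",
	"username": "cn=reader,cn=users,dc=example,dc=com",
	"password": "<password>",
	"tag": ["ad"]
}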

FILE

Supports JSON or CSV files.

  • name - Name of the data source
  • description - Short description that describes the data source
  • filetype - Type of file to read (json or csv)
  • filename - Full path and filename
  • encoding - Encoding of the file (utf8, unicode or ascii)
  • tag - Tag(s) for the data source (can be used for exporting configuration)
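
For example, a CSV file data source could look like this (values are placeholders; note that backslashes in Windows paths must be escaped in JSON):

{
	"name": "customers-file",
	"description": "Customer export file",
	"filetype": "csv",
	"filename": "C:\\data\\customers.csv",
	"encoding": "utf8",
	"tag": ["customers"]
}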

Script

Create custom scripts (PowerShell) to read from any data source.
See the Script task section under Tasks below.

Tasks

The following task types are supported in connector pipelines.

Reader task

Read (load) data from a selected data source

  • name - Name of the task
  • description - Short description that describes the task
  • datasourcetype - Select data source type (file, sql, rest or ldap)
  • datasource - Select data source to read from
  • query - Query that matches the data source, e.g. a SQL query for a SQL data source or a URL for a REST API data source
  • outvariable - Name of the output variable to use in other tasks (e.g. $V.("var1"))
  • loop - If this task should loop through an (array) variable
  • loopvariable - Name of the (array) variable to loop through (e.g. var1)
  • loopsleep - How many milliseconds to sleep between loop requests
  • tag - Tag(s) for the task (can be used for exporting configuration)
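
As a sketch, a reader task that loads customers from the hypothetical SQL data source defined earlier could look like this (the task name, data source name, query and output variable are all illustrative):

{
	"name": "read-customers",
	"description": "Read customers from the CRM database",
	"datasourcetype": "sql",
	"datasource": "crm-db",
	"query": "SELECT CustomerNumber, Name, Address1, ZipCode, City FROM customers",
	"outvariable": "customers",
	"tag": ["crm"]
}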

Transform task

Transform data before indexing

  • name - Name of the task
  • description - Short description that describes the task
  • endpointtype - What type of endpoint (type) to use for indexing (item, user, option, event or role)
  • invariable - Input (array) variable to transform
  • mapping - JSON structure (item, user, option, event or role) that the data is transformed into. Use #variable_name# to reference fields inside the invariable variable. See the examples below.
  • outvariable - Name of transformed output variable
  • tag - Tag(s) for the task (can be used for exporting configuration)
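
As a sketch, a transform task tying the reader output above to a customer mapping could be defined like this. The task and variable names are hypothetical, and the mapping field holds a JSON structure like the examples that follow:

{
	"name": "map-customers",
	"description": "Map CRM customers to items",
	"endpointtype": "item",
	"invariable": "customers",
	"mapping": {
		"key": "customer-#CustomerNumber#",
		"name": "#Name#"
	},
	"outvariable": "customers_items",
	"tag": ["crm"]
}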

Transform mapping example for (customer) item

{
	"key": "customer-#CustomerNumber#",
	"name" : "#Name#",
	"tag" : ["customer", "company"],
	"type" : "customer",
	"attribute" : { 
		"invoiceaddress": "#Address1#",
		"invoicezipcode": "#ZipCode#",
		"invoicecity": "#City#"
	}
}

Transform mapping example of a multi-level variable/object

Here is the object to map from:

{
    "key": "p1000229",
    "name": "P1000229 - Lenovo ThinkStation S20",
    "status": "In use",
    "attribute": {
        "serialnumber": "L3BB911",
        "model": "Lenovo ThinkStation S20",
        "costcenter": "Sales",
        "_sys_id": "33c1fa8837f3100044e0bfc8bcbe5def",
        "company": "ACME North America",
        "location": "650 Dennery Road #102, San Diego,CA",
        "vendor": "Lenovo",
        "department": "Sales",
    }
}

Here is the mapping configuration:

{
	"key": "#key#",
	"name": "#name#",
	"attribute": {
		"company": "#attribute.company#"
	}
}

And here is the result:

{
    "key": "p1000229",
    "name": "P1000229 - Lenovo ThinkStation S20",
    "attribute": {
        "company": "ACME North America"
    }
}

Index task

Index data (items, users, options, roles or events)

  • name - Name of the task
  • description - Short description that describes the task
  • endpointtype - What type of endpoint (type) to use for indexing (item, user, option, event or role)
  • indexmethod - Method to use when indexing objects (POST = Create new or replace objects, PUT = Create new or update objects, PATCH = Update existing objects)
  • invariable - Input (array) variable of objects to index
  • alwaysupdate - Check the local cache (MD5 hash) to determine whether an object needs to be updated in the index or not. Normally checked for a full index pipeline and unchecked for a delta index pipeline.
  • tag - Tag(s) for the task (can be used for exporting configuration)
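
As a sketch, an index task for the transformed customers could look like this (the invariable name is hypothetical and assumed to be the outvariable of a preceding transform task):

{
	"name": "index-customers",
	"description": "Index customers as items",
	"endpointtype": "item",
	"indexmethod": "POST",
	"invariable": "customers_items",
	"alwaysupdate": true,
	"tag": ["crm"]
}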

Script task

Supports custom scripts (PowerShell)

  • name - Name of the task
  • description - Short description that describes the task
  • scriptsource - Inline script or file-based script
  • script - The script itself (if inline) or the full path to the script file (if file based)
  • tag - Tag(s) for the task (can be used for exporting configuration)
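
As a rough PowerShell sketch, an inline script could post-process records using the $V variable syntax mentioned under the Reader task. The customers variable and the segment attribute are hypothetical:

# Add a hypothetical attribute to each customer record read by an earlier task
foreach ($customer in $V.("customers")) {
    $customer | Add-Member -NotePropertyName "segment" -NotePropertyValue "enterprise" -Force
}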

Purge task

Purge obsolete data from the index

  • name - Name of the task
  • description - Short description that describes the task
  • endpointtype - What type of endpoint (type) to use for indexing (item, user, option, event or role)
  • query - Enter query params based on endpointtype. See examples below.
  • tag - Tag(s) for the task (can be used for exporting configuration)

Purge ALL objects

query=*

Purge company items not modified in the last 2 hours

query=type%3Acompany%20AND%20modifieddate%3A%5B*%20TO%20now-2h%5D

(URL decoded: query=type:company AND modifieddate:[* TO now-2h])

Pipelines

Create new pipeline

Build pipelines based on one or more tasks.

  • name - Name of the pipeline
  • key - Unique key for the pipeline that will be used when running the pipeline
  • description - Short description that describes the pipeline
  • role - Users with this role have access to run the pipeline. If not set, everyone can run the pipeline
  • tag - Tag(s) for the pipeline (can be used for exporting configuration)
  • tasks.tasktype - What type of task to include
  • tasks.task - Select the task
  • tasks.order - The order in which the task should run (required and must be unique)
  • tasks.required - If the task is required and therefore must not generate an error

Example pipeline

  1. Reader task: Read customers from CRM system
  2. Transform task: Map customer fields
  3. Index task: Index customers from CRM
  4. Reader task: Read customers from ERP system
  5. Script task: Update special customer fields
  6. Index task: Update customers from ERP
  7. Purge task: Delete obsolete customers from index
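
Expressed as a pipeline definition using the fields above, the first three steps might look like this. The tasktype values and task names are assumptions based on the task types in this document, not confirmed identifiers:

{
	"name": "Index CRM customers",
	"key": "index-crm-customers",
	"description": "Read, transform and index customers from the CRM system",
	"tag": ["crm"],
	"tasks": [
		{ "tasktype": "reader", "task": "read-customers", "order": 1, "required": true },
		{ "tasktype": "transform", "task": "map-customers", "order": 2, "required": true },
		{ "tasktype": "index", "task": "index-customers", "order": 3, "required": true }
	]
}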

Run Connector Pipeline

You can run/execute a pipeline via the API, using the /my/connector/pipelines/{key}/run endpoint, or via the command line. See the examples below.
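
For example, calling the endpoint from PowerShell could look like this (the POST method and Bearer authorization header are assumptions; adjust to your API's authentication scheme):

Invoke-RestMethod -Method Post `
	-Uri "<api-url>/api/v2/my/connector/pipelines/<pipeline-key>/run" `
	-Headers @{ Authorization = "Bearer <api-token>" }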

Run Connector for Onify API v2

.\runpipeline.ps1 -pipeline <pipeline-key> -api_url <api-url>/api/v2 -api_token <api-token>

Run Connector for Onify API v1

You need to specify a locale for API v1.

.\runpipeline.ps1 -pipeline <pipeline-key> -api_url <api-url>/api/v1 -api_token <api-token> -locale <locale>