Connector
The Connector is the component that indexes data into Onify. It indexes things like objects, users, events, and options. The Connector supports multiple data sources, such as SQL, LDAP, and REST. Pipelines are created to streamline the indexing process using a chain of reusable tasks.
For some example connector pipelines, check out our example GitHub repo.
Install and setup Connector
To set up the Connector, complete the following steps:
- Download the latest Onify Agent (via https://support.onify.co/changelog)
- Install and register Onify Agent (see https://support.onify.co/docs/install#onify-agent)
- Make sure the connector scripts are in the ./scripts/connector folder of the Onify Agent folder
- Add the tag connector to the Onify Agent that will be running the connector scripts (via <onify-app-url>/admin/configuration/agents)
Configure Connector
Data sources
The connector supports the following data sources.
REST API
- name - Name of the data source
- description - Short description that describes the data source
- host - Specify hostname, eg. api.service.com
- scheme - Use http or https for host
- timeout - Timeout for web request
- method - Http method for data source (get, post or put)
- pagination.paginationtype - Set pagination type for response, if any (nexturl or offset)
- pagination.nexturlfield - The next page url field (if pagination is set to nexturl)
- pagination.offset - The offset field (if pagination is set to offset)
- pagination.totalfield - The offset total results field (if pagination is set to offset)
- headers - Add http headers (key and name)
- body - JSON request body
- recordspath - Set if records are located in a specific variable/branch
- tag - Tag(s) for the data source (can be used for exporting configuration)
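To show how the fields fit together, here is a set of example values for a REST API data source, written as a PowerShell hashtable purely for readability. The host, pagination fields and tag are made-up examples; the hashtable form is not how the data source is stored.
# Example values only - the hashtable form is just for illustration
$restDataSource = @{
    name        = "crm-api"
    description = "Customer API in the CRM system"
    host        = "api.service.com"
    scheme      = "https"
    timeout     = 30
    method      = "get"
    pagination  = @{
        paginationtype = "offset"
        offset         = "offset"    # response field holding the current offset
        totalfield     = "total"     # response field holding the total number of records
    }
    headers     = @{ "Content-Type" = "application/json" }
    recordspath = "results"          # records are found under the "results" branch of the response
    tag         = @("crm")
}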
SQL
Supports Microsoft SQL and MySQL.
- name - Name of the data source
- description - Short description that describes the data source
- host - Specify hostname, eg. dbserver1
- sqltype - Select type of SQL server to connect to (mysql or mssql)
- database - Name of the database
- port - Custom port (if any)
- timeout - Timeout for SQL query
- encrypt - Encrypt communications (required for Azure databases)
- username - SQL username (required for mssql but optional for mysql)
- password - SQL user password (required for mssql but optional for mysql)
- tag - Tag(s) for the data source (can be used for exporting configuration)
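Example values for a Microsoft SQL data source, in the same illustrative form (all values are made up):
# Example values only
$sqlDataSource = @{
    name        = "crm-db"
    description = "CRM database"
    host        = "dbserver1"
    sqltype     = "mssql"
    database    = "crm"
    port        = 1433        # custom port, if any
    timeout     = 60
    encrypt     = $true       # required for Azure databases
    username    = "onify_reader"
    password    = "********"
    tag         = @("crm")
}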
LDAP
- name - Name of the data source
- description - Short description that describes the data source
- server - Specify hostname, eg. ldap.domain.local
- port - LDAP port
- timeout - Timeout for LDAP request
- searchbase - The location in the directory where the search for a particular directory object begins, eg. dc=example,dc=com
- searchscope - Indicates the set of entries at or below the BaseDN that may be considered potential matches (base - limits to just the base object, onelevel - limits to just the immediate children, subtree - searches the entire subtree from the base object down)
- username - LDAP username
- password - LDAP user password
- tag - Tag(s) for the data source (can be used for exporting configuration)
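Example values for an LDAP data source (illustrative only; names and credentials are made up):
# Example values only
$ldapDataSource = @{
    name        = "company-ad"
    description = "Company directory"
    server      = "ldap.domain.local"
    port        = 389
    timeout     = 30
    searchbase  = "dc=example,dc=com"
    searchscope = "subtree"    # search the entire subtree below the search base
    username    = "ldap-reader"
    password    = "********"
    tag         = @("ad")
}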
FILE
Supports JSON or CSV files.
- name - Name of the data source
- description - Short description that describes the data source
- filetype - Type of file to read (json or csv)
- filename - Full path and filename
- encoding - Encoding of the file (utf8, unicode or ascii)
- tag - Tag(s) for the data source (can be used for exporting configuration)
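Example values for a file data source (illustrative only; path and names are made up):
# Example values only
$fileDataSource = @{
    name        = "customer-file"
    description = "Customer export file"
    filetype    = "csv"
    filename    = "C:\onify\data\customers.csv"
    encoding    = "utf8"
    tag         = @("customers")
}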
Script
Create custom scripts (PowerShell) to read from any data source.
See Tasks > Script task section.
Tasks
The following task types are supported in connector pipelines.
Reader task
Read (load) data from selected data source
- name - Name of the task
- description - Short description that describes the task
- datasourcetype - Select data source type (file, sql, rest or ldap)
- datasource - Select data source to read from
- query - Query that matches the data source, eg. SQL query for a SQL data source or URL for a REST API data source
- outvariable - Name of output variable to use in other tasks (eg. $V.("var1"))
- loop - If this task should loop through an (array) variable
- loopvariable - Name of the (array) variable to loop through (eg. var1)
- loopsleep - How many milliseconds to sleep between loop requests
- tag - Tag(s) for the task (can be used for exporting configuration)
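As a sketch, a reader task that reads customers from a SQL data source and makes them available to later tasks could use values like these (the names and the query are made up):
# Example values only
$readerTask = @{
    name           = "read-crm-customers"
    description    = "Read customers from the CRM database"
    datasourcetype = "sql"
    datasource     = "crm-db"
    query          = "SELECT CustomerNumber, Name, Address1, ZipCode, City FROM Customers"
    outvariable    = "customers"    # later tasks can reference this as $V.("customers")
    loop           = $false
    tag            = @("crm")
}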
Transform task
Transform data before indexing
- name - Name of the task
- description - Short description that describes the task
- endpointtype - What type of endpoint (type) to use for indexing (item, user, option, event or role)
- invariable - Input (array) variable to transform
- mapping - JSON structure (item, user, option, event or role) to be transformed. Use #variable_name# to address variables inside the invariable variable. See example below.
- outvariable - Name of transformed output variable
- tag - Tag(s) for the task (can be used for exporting configuration)
Transform mapping example for (customer) item
{
"key": "customer-#CustomerNumber#",
"name" : "#Name#",
"tag" : ["customer", "company"],
"type" : "customer",
"attribute" : {
"invoiceaddress": "#Address1#",
"invoicezipcode": "#ZipCode#",
"invoicecity": "#City#"
}
}
Transform mapping example of multi level variable/object
Here is the object to map from:
{
"key": "p1000229",
"name": "P1000229 - Lenovo ThinkStation S20",
"status": "In use",
"attribute": {
"serialnumber": "L3BB911",
"model": "Lenovo ThinkStation S20",
"costcenter": "Sales",
"_sys_id": "33c1fa8837f3100044e0bfc8bcbe5def",
"company": "ACME North America",
"location": "650 Dennery Road #102, San Diego,CA",
"vendor": "Lenovo",
"department": "Sales",
}
}
Here is the mapping configuration:
{
"key": "#key#",
"name": "#name#",
"attribute": {
"company": "#attribute.company#"
}
}
And here is the result:
{
"key": "p1000229",
"name": "P1000229 - Lenovo ThinkStation S20",
"attribute": {
"company": "ACME North America"
}
}
Index task
Index data (items, users, options, roles or events)
- name - Name of the task
- description - Short description that describes the task
- endpointtype - What type of endpoint (type) to use for indexing (item, user, option, event or role)
- indexmethod - Method to use when indexing objects (POST = create new or replace objects, PUT = create new or update objects, PATCH = update existing objects)
- invariable - Input (array) variable of objects to index
- alwaysupdate - Check local cache (MD5 hash) to see if the object needs to be updated in the index or not. Normally checked for a full index pipeline and unchecked for a delta index pipeline.
- tag - Tag(s) for the task (can be used for exporting configuration)
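Example values for an index task that indexes transformed customers as items (illustrative only):
# Example values only
$indexTask = @{
    name         = "index-crm-customers"
    description  = "Index customers from CRM"
    endpointtype = "item"
    indexmethod  = "PUT"     # create new or update existing objects
    invariable   = "customers_transformed"
    alwaysupdate = $true     # normally checked for a full index pipeline, unchecked for a delta pipeline
    tag          = @("crm")
}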
Script task
Supports custom scripts (PowerShell)
- name - Name of the task
- description - Short description that describes the task
- scriptsource - Inline script or file based script
- script - Type the script here if inline or type the full path for the script if file
- tag - Tag(s) for the task (can be used for exporting configuration)
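As an example, an inline script could look like the sketch below. The $V.("...") syntax for reading a variable from a previous task is taken from the reader task description above; how the result is handed back to later tasks depends on your pipeline, so treat the last line as an assumption.
# Sketch of an inline script (variable handling is an assumption)
$customers = $V.("customers")      # output variable from a previous reader task
foreach ($customer in $customers) {
    # example: clean up a field before indexing
    $customer.Name = $customer.Name.Trim()
}
$V.("customers") = $customers      # assumed way of handing the result back to the pipeline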
Purge task
Purge obsolete data from the index
- name - Name of the task
- description - Short description that describes the task
- endpointtype - What type of endpoint (type) to use for indexing (item, user, option, event or role)
- query - Enter query params based on endpointtype. See examples below.
- tag - Tag(s) for the task (can be used for exporting configuration)
Purge ALL objects
query=*
Purge company items not modified in the last 2 hours
query=type%3Acompany%20AND%20modifieddate%3A%5B*%20TO%20now-2h%5D
Pipelines
Create new pipeline
Build pipelines based on one or more tasks.
- name - Name of the pipeline
- key - Unique key for the pipeline that will be used when running the pipeline
- description - Short description that describes the pipeline
- role - Users with this role have access to running the pipeline. If not set, everyone can run this pipeline
- tag - Tag(s) for the pipeline (can be used for exporting configuration)
- tasks.tasktype - What type of task to include
- tasks.task - Select the task
- tasks.order - In what order the task should be run (required and must be unique)
- tasks.required - If the task is required and therefore should not generate an error
Example pipeline
- Reader task: Read customers from CRM system
- Transform task: Map customer fields
- Index task: Index customers from CRM
- Reader task: Read customers from ERP system
- Script task: Update special customer fields
- Index task: Update customers from ERP
- Purge task: Delete obsolete customers from index
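For illustration, the first three steps of the example pipeline above could be expressed with the pipeline fields like this. The structure is only a sketch of how the fields relate, not how the pipeline is stored.
# Example values only
$pipeline = @{
    name        = "Index CRM and ERP customers"
    key         = "index-customers"
    description = "Full index of customers from CRM and ERP"
    tag         = @("customers")
    tasks       = @(
        @{ tasktype = "reader";    task = "Read customers from CRM system"; order = 1; required = $true },
        @{ tasktype = "transform"; task = "Map customer fields";            order = 2; required = $true },
        @{ tasktype = "index";     task = "Index customers from CRM";       order = 3; required = $true }
    )
}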
Run Connector Pipeline
You can run (execute) a pipeline via the API, using the /my/connector/pipelines/{key}/run endpoint, or from the command line. See the examples below.
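For example, a pipeline run could be triggered from PowerShell with Invoke-RestMethod. The HTTP method and the Bearer authorization header are assumptions here; adjust them to how your Onify API is set up.
# Sketch: trigger a pipeline run via the API (HTTP method and auth header are assumptions)
$apiUrl   = "<api-url>/api/v2"
$apiToken = "<api-token>"
Invoke-RestMethod -Method Post `
    -Uri "$apiUrl/my/connector/pipelines/<pipeline-key>/run" `
    -Headers @{ Authorization = "Bearer $apiToken" }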
Run Connector for Onify API v2
.\runpipeline.ps1 -pipeline <pipeline-key> -api_url <api-url>/api/v2 -api_token <api-token>
Run Connector for Onify API v1
You need to specify locale for API v1.
.\runpipeline.ps1 -pipeline <pipeline-key> -api_url <api-url>/api/v1 -api_token <api-token> -locale <locale>