Workers and Workflows
Please note that job manager element names (like worker and workflow names) must conform to the job manager naming convention:
- Names must only consist of the following characters: a-zA-Z0-9._-
If they do not conform, the elements won't be accessible in SMILA:
- Pushing elements with invalid names will result in a 400 Bad Request.
- Predefined elements with invalid names won't be loaded; a warning will be logged in the SMILA.log file.
E.g.
... WARN ... internal.DefinitionPersistenceImpl - Error parsing predefined worker definitions from configuration area org.eclipse.smila.common.exceptions.InvalidDefinitionException: Value 'worker#1' in field 'name' is not valid: A name must match pattern ^[a-zA-Z0-9-_\.]+$.
Workers
Worker definition
A worker definition describes the input and output behavior as well as the required parameters of a worker. The definitions are provided with the software and must be known in the system before a worker can be added as an action to a workflow. They cannot be added or edited at runtime and are therefore not intended to be manipulated by the user.
Typically, a worker definition consists of the following parts:
- A parameter section declaring the worker's parameters: These parameters must be set either in the workflow or in the job definition when using this worker.
- An input slot describing the type of input objects that the worker is able to consume: All input slots must be connected to buckets in a workflow definition that wants to use this worker.
- An output slot describing the type of output objects that the worker generates: All output slots must be connected to buckets in a workflow definition that wants to use this worker. Exceptions to this rule are output slots that are marked as optional in the worker definition and output slots that belong to another slot group (see below).
Slot groups
As an advanced feature, output slots can be associated with a group label. Slots having the same group label belong to the same group. Grouping is used to define which slots can be used together in the same workflow and which cannot. Slots that are not associated with a group label implicitly belong to every group and can therefore be combined freely, whereas slots from different groups cannot be used in the same workflow. When using groups, the rules concerning optional and mandatory output slots are as follows (a sketch follows the list):
- A mandatory slot without a group label must always be connected to a bucket.
- An optional slot without a group label may be combined with slots of any group.
- If a particular group shall be used, all mandatory slots of the group must be connected to a bucket.
- If every group contains at least one mandatory slot, at least one group must be connected; it is then not possible to connect only the slots without a group label.
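To illustrate, here is a minimal sketch of a worker definition with grouped output slots. The worker, slot, and group names are made up for this example and do not refer to a real worker:

{
  "name": "exampleGroupWorker",
  "input": [ { "name": "inputRecords", "type": "recordBulks" } ],
  "output": [
    { "name": "allRecords", "type": "recordBulks", "modes": [ "optional" ] },
    { "name": "addedRecords", "type": "recordBulks", "group": "delta" },
    { "name": "deletedRecords", "type": "recordBulks", "group": "delta", "modes": [ "optional" ] },
    { "name": "plainRecords", "type": "recordBulks", "group": "plain" }
  ]
}

A workflow using this worker may connect either the delta slots (in which case addedRecords, the mandatory slot of that group, must be connected) or the plainRecords slot, but not slots from both groups. The ungrouped allRecords slot can be combined with either group because it implicitly belongs to both.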
Worker properties in detail
- name: Required. Defines the name of the worker. Can be used in a workflow to add the worker as an action.
- modes: Optional. Sets a mode in the worker, controlling a special behavior (a sketch combining several of these properties follows this list). Currently available:
- bulkSource: describes workers like the Bulkbuilder that get data from somewhere not under JobManager control (e.g. an external API). Such workers are needed to start jobs in standard mode without an input bucket. They do not wait for tasks to appear in the TaskManager queues for them, but require the JobManager to create "initial tasks" for them on demand.
- autoCommit: if this worker fails to process a task, the Job Manager will not retry that task but finishes it as successful and creates follow-up tasks based on what the worker has produced already. An example is again the Bulkbuilder: If a client sends a record to the Bulkbuilder, it assumes that the record will be processed, so if the Bulkbuilder fails to finish the task correctly, the records already added to a bulk must be processed by the job nonetheless.
- runAlways: Task delivery to this worker is not limited by scale-up control: If tasks are available, the worker is allowed to process as many tasks as the scaleUp limit for this worker specifies (by default 1), even if the global task scale-up limit for this node has already been reached. This mode should therefore be used only for workers that perform very important tasks that should not be delayed too much (the internal "_finishingTasks" worker is an example). Be careful when increasing the scaleUp limit for such workers, because this can easily result in an overload on a node; be especially cautious about runAlways workers that have long-running, computationally intensive tasks.
- requestsCompletion: describes a worker that wants to add completion tasks after the normal job tasks have finished. See Job Run Life Cycle for details and Import Delta Delete for an example.
- barrier: Tasks for this worker will not be generated before all tasks have been finished for workers that occur earlier in the same workflow run. Then, all bulks created so far in this workflow run, in all buckets connected to the slot, will be given to the task generator of this worker in a single call. The purpose of this mode is to support MapReduce-style workflows. Using more than one barrier worker in a single workflow is supported. Cycles with barriers have not yet been tested ;-)
- parameters: Optional. Contains a description of the worker's parameters, i.e. the supported parameters, their possible types, cardinalities, values, etc. See SMILA/Documentation/ParameterDefinition for details.
- taskGenerator: Optional. Defines the name of the OSGi service which should be used to create the tasks whenever there are changes in the respective input buckets. If the taskGenerator is not set, the default task generator is used.
- input: Optional. Describes the input slots:
- name: Gives the name of a slot. Has to be bound as a parameter key to an existing bucket in a workflow.
- type: Gives the required data object type of the input slot. The bucket bound in an actual workflow must comply with this type.
- modes: Sets the mode(s) of the respective input slot, controlling a special behavior.
- qualifier: When set, the worker uses "Conditional GET" to select tasks with certain objects for this input slot.
- optional: When set, no error will occur when adding a workflow that uses this worker as start worker without an input bucket.
- The worker has to check whether the input slot is connected instead of accessing it directly.
- output: Optional. Describes the output slots:
- name: Gives the name of the slot. Has to be bound as a parameter key to an existing bucket in a workflow.
- type: Gives the required data object type of the output slot. The bucket bound in an actual workflow must comply with this type.
- group: Gives the group label of this slot (see above).
- modes: Sets the mode(s) of the respective output slot, controlling a special behavior.
- optional: When set, no error will occur when adding a workflow that does not bind the output slot to a bucket.
- multiple: When set, the number of output bulks is not predefined by job management. Instead, the worker can create multiple output bulks based on an object id prefix given by job management.
- maybeEmpty: When set, no error will occur when a worker doesn't create an output bulk for a processed task.
- appendable: When set, the bulk has to be created for failsafe appending.
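The following sketch combines several of the properties described above in a single worker definition. The worker name, parameter names, and slot names are hypothetical, and which mode combinations a real worker supports depends on its implementation:

{
  "name": "exampleDeltaWorker",
  "modes": [ "requestsCompletion" ],
  "parameters": [
    { "name": "dataSource" },
    { "name": "batchSize", "optional": true }
  ],
  "input": [
    { "name": "recordsToProcess", "type": "recordBulks", "modes": [ "qualifier" ] }
  ],
  "output": [
    { "name": "recordsToDelete", "type": "recordBulks", "modes": [ "multiple", "maybeEmpty" ] }
  ]
}

Here the worker requests completion tasks at the end of the job run, selects tasks for its input slot by qualifier, and may produce any number of output bulks, including none.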
Worker definitions can include additional information (e.g. comments or layout information for graphical design tools, etc.), but a GET request will return only relevant information (i.e. the above attributes). If you want to retrieve the additional info that is present in the json file, add returnDetails=true as request parameter.
Example
An exemplary worker definition:
{ "name" : "exampleWorker", "readOnly": true, "parameters":[ { "name": "parameter1" , "optional": "true"}, { "name": "parameter2" } ], "input" : [ { "name" : "inputRecords", "type" : "recordBulks" } ], "output" : [ { "name" : "outputRecords", "type" : "recordBulks" }] }
As workers currently can be defined in the system configuration only, they are all marked as "readOnly" (see SMILA/Documentation/JobManagerConfiguration).
A more complex sample:
{ "workers":[ { "name":"worker", "parameters":[ { "name":"stringParam", "optional":true, "description":"optional string parameter with default type 'string'" }, { "name":"booleanParam", "type":"boolean", "description":"boolean parameter" }, { "name":"enumParam", "type":"string", "values":[ "val1", "val2" ], "optional":true, "description":"optional enum parameter with values 'val1' or 'val2'" }, { "name":"mapParam", "type":"map", "entries":[ { "name":"key1", "type":"string" }, { "name":"key2", "type":"string" } ], "description":"map parameter with two entries of type string and keys 'key1' and 'key2'" }, { "name":"sequenceOfStringsParam", "type":"string", "multi":true, "description":"a sequence of string parameters" }, { "name":"<something>", "type":"string", "description":"additional parameter with unspecified name" }, { "name":"anyParam", "type":"any", "optional":true, "description":"optional parameter with an 'Any' value" } ] } ] }
List workers
All workers
Use a GET request to list all worker definitions.
Supported operations:
- GET: Returns a list of all worker definitions. If you want to retrieve the additional information (if present), add returnDetails=true as request parameter. An example response is shown below.
Usage:
- URL: http://<hostname>:8080/smila/jobmanager/workers/
- Allowed methods:
- GET
- Response status codes:
- 200 OK: Upon successful execution.
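For example, a GET request to this URL could produce a response like the following; the actual list depends on the workers known to your system:

{
  "workers": [
    {
      "name": "exampleWorker",
      "readOnly": true,
      "input": [ { "name": "inputRecords", "type": "recordBulks" } ],
      "output": [ { "name": "outputRecords", "type": "recordBulks" } ]
    }
  ]
}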
Specific worker
Use a GET request to list the definition of a specific worker.
Supported operations:
- GET: Returns the definition of the given worker. Optional parameter: returnDetails: true or false (default). An example response is shown below.
Usage:
- URL: http://<hostname>:8080/smila/jobmanager/workers/<worker-name>/
- Allowed methods:
- GET
- Response status codes:
- 200 OK: Upon successful execution.
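For example, a GET request to http://<hostname>:8080/smila/jobmanager/workers/exampleWorker/?returnDetails=true could return the worker definition including additional properties from the configuration file; the "comment" property below is only an illustration of such additional information:

{
  "name": "exampleWorker",
  "readOnly": true,
  "comment": "additional property, returned only with returnDetails=true",
  "input": [ { "name": "inputRecords", "type": "recordBulks" } ],
  "output": [ { "name": "outputRecords", "type": "recordBulks" } ]
}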
Workflows
Workflow definition
A workflow definition describes the individual actions of an asynchronous workflow by connecting workers to input and output slots. Which slots have to be connected depends on the workers you are using and is defined by the worker definition. Typically, all input and output slots of a used worker must be associated with buckets, and the type of each connected bucket must match the type defined in the worker's definition.
A workflow run starts with the start-action. The order of the other actions is determined by their inputs and outputs.
Connecting a workflow to another workflow
A workflow can be linked to another workflow when both share the same persistent bucket. To give an example, let's assume a workflow named A and a workflow named B sharing the same bucket. If workflow A adds an object to the shared bucket, workflow B is triggered to process this data. To be able to connect workflows A and B, the following prerequisites must be fulfilled (a sketch of such a pair of workflows follows the list):
- The shared bucket must be a persistent one.
- The definition of workflow A must define the shared bucket as an output bucket of an action. This can be any action in the workflow chain, hence, not necessarily the first or the last one.
- The definition of workflow B must state the shared bucket as the input bucket of its start action. Other positions in the workflow definition will not do.
- Individual jobs must be created for both the triggering (A) and the triggered workflow (B).
- The parameters used for the store and object name in the data object type definition of the shared bucket must be identical in both job definitions.
- The job runs must fulfill the following conditions to allow for the triggering of a connected workflow:
- The status of the job run using workflow A must be RUNNING or FINISHING.
- The status of the job run using workflow B must be RUNNING.
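The following sketch shows such a pair of connected workflow definitions. The workers (producerWorker, consumerWorker) and bucket names are hypothetical; sharedBucket is assumed to be defined elsewhere as a persistent bucket:

{
  "name": "workflowA",
  "startAction": {
    "worker": "producerWorker",
    "input": { "recordsIn": "inputBucket" },
    "output": { "recordsOut": "sharedBucket" }
  }
}

{
  "name": "workflowB",
  "startAction": {
    "worker": "consumerWorker",
    "input": { "recordsIn": "sharedBucket" },
    "output": { "recordsOut": "resultBucket" }
  }
}

If jobs have been created and started for both workflows, each object that the job on workflowA writes to sharedBucket triggers a task for consumerWorker in the job on workflowB.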
Warning: As there is no explicit chaining of workflows, you have to be very careful when using the same bucket name in multiple workflow definitions. This might result in the triggering of jobs which were not meant to be triggered at all.
Workflow properties in detail
Description of a workflow:
- name: Required. Gives the name of the workflow.
- modes (LIST): Optional. Restricts the modes a job referring to this workflow can be started in and defines the default mode. Possible modes are standard and runOnce.
- If no modes are given (and none are set in the job definition), a job can be started on this workflow in any mode; the default mode is standard if no mode is explicitly provided at job start.
- The first mode in this list will be used as the default job run mode (i.e. if no mode is provided during job start).
- A modes section in the workflow can be restricted (or the default mode can be changed) by a modes section in the job definition, but cannot be expanded there. See Job modes for more information; a sketch follows this list.
- parameters (MAP): Optional. Sets the global workflow parameters. They apply to all actions in the workflow as well as to the buckets used by these workers.
- startAction (MAP): Required. Defines the starting action of the workflow. There can be only one starting action within the workflow.
- actions (LIST of MAPs): Optional. Defines the follow-up actions of the workflow.
- timestamp: The (read-only) timestamp that is created by the system when the workflow is pushed to the system (initial creation or last update). Read-only workflows (i.e. workflows initially loaded from the workflow.json file) have no timestamp property. The value cannot be set manually; it is defined by the system.
- Additional properties can be provided, but will only be listed when returnDetails is set to true. This could be used by a designer tool to add layout information or comments.
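To illustrate the modes restriction described above, here is a hedged sketch of a workflow that allows both modes but defaults to runOnce, together with a job definition that restricts it to runOnce. All names are hypothetical, and the job definition is reduced to its minimal form:

{
  "name": "flexibleWorkflow",
  "modes": [ "runOnce", "standard" ],
  "startAction": {
    "worker": "worker1",
    "input": { "slotA": "myBucketA" },
    "output": { "slotB": "myBucketB" }
  }
}

{
  "name": "restrictedJob",
  "workflow": "flexibleWorkflow",
  "modes": [ "runOnce" ]
}

Jobs on flexibleWorkflow default to runOnce because it is listed first; restrictedJob can only be started in runOnce mode, while other jobs on the same workflow may still choose standard.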
Description of startAction and actions:
- worker: Gives the name of a worker. This name must match the name given in the worker definition.
- parameters: Sets the local worker parameters. They apply to the referenced worker but not to the buckets used by this worker.
- input (MAP): Maps the worker's named input slot(s) to an existing bucket definition. The name of an input slot must be the key and the name of the bucket must be the value of that key. All of the worker's named input slots have to be resolved against existing buckets of the expected type.
- output (MAP): Maps the worker's named output slot(s) to an existing bucket definition. The name of an output slot must be the key and the name of the bucket must be the value of that key. All of the worker's named output slots have to be resolved against existing buckets of the expected type.
Workflow definitions can include additional information (e.g. comments or layout information for graphical design tools, etc.), but a GET request will return only relevant information (i.e. the above attributes). If you want to retrieve the additional info that is present in the json file or has been posted with the definition, add returnDetails=true as request parameter.
Non-forking workflows
Workflows are called "non-forking" if no two workers in the workflow share the same input bucket. This has an impact on the cleanup of temporary objects during a job run: For non-forking workflows, an input object is removed from a transient bucket directly after the worker has successfully finished its task. For forking workflows, this cleanup is not done before the whole workflow run has finished.
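For example, the following workflow (with hypothetical worker and bucket names) is forking because worker2 and worker3 read from the same bucket; objects in myBucketB are therefore removed only after the whole workflow run has finished:

{
  "name": "forkingWorkflow",
  "startAction": {
    "worker": "worker1",
    "input": { "recordsIn": "myBucketA" },
    "output": { "recordsOut": "myBucketB" }
  },
  "actions": [
    { "worker": "worker2", "input": { "recordsIn": "myBucketB" }, "output": { "recordsOut": "myBucketC" } },
    { "worker": "worker3", "input": { "recordsIn": "myBucketB" }, "output": { "recordsOut": "myBucketD" } }
  ]
}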
Example
An exemplary workflow definition:
{ "name":"myWorkflow", "modes": ["runOnce","standard"], "parameters":{ "paramKey2":"paramValue2", "paramKey1":"paramValue2" }, "startAction":{ "worker":"worker1", "input":{ "slotA":"myBucketA" }, "output":{ "slotB":"myBucketB" } }, "actions":[ { "worker":"worker2", "parameters":{ "paramKey3":"paramValue3" }, "input":{ "slotC":"myBucketB" }, "output":{ "slotD":"myBucketC" } }, { "worker":"worker3", "input":{ "slotE":"myBucketC" }, "output":{ "slotF":"myBucketD" } } ], "timestamp" : "2011-07-25T08:57:47.628+0200" }
List, create, and modify workflows
All workflows
Use a GET request to list the definitions of all workflows. If the timestamps (if present) or any other additional information contained in the definition should also be displayed, the request parameter returnDetails must be set to true. Use POST for adding or updating a workflow definition.
Supported operations:
- GET: Returns a list of all workflow definitions. If there are no workflows defined, you will get an empty list. Optional request parameter: returnDetails: true or false (default).
- POST: Create a new workflow definition or update an existing one. If the workflow already exists, it will be updated after successful validation. However, the changes will not apply until the next job run, i.e. the current job run is not influenced by them. Only workers for which worker definitions exist can be added to the workflow definition as actions. When adding a worker, all parameters defined in the worker's definition have to be satisfied: if not in the global or local sections of the workflow definition itself, then later in the job definition. Also, all input and output slots have to be connected to existing buckets if they are persistent; for transient buckets, at least a bucket name must be provided. Exceptions to this rule are optional slots, which need not be connected, and slots of unused slot groups, which must not be connected. An error will be thrown:
- If a required slot is not connected to a bucket.
- If a referenced bucket, defined as persistent one, does not exist.
Usage:
- URL: http://<hostname>:8080/smila/jobmanager/workflows/
- Allowed methods:
- GET
- POST
- Response status codes:
- 200 OK: Upon successful execution (GET).
- 201 CREATED: Upon successful execution (POST).
- 400 Bad Request: name and startAction are mandatory fields. If they are not set or the name is invalid, an HTTP 400 Bad Request including an error message in the response body will be returned. If a workflow update is requested but results in an error during validation, the update will fail with response status 400 as well.
Specific workflow
Use a GET request to retrieve the definition of a specific workflow. Use DELETE to delete a specific workflow.
Supported operations:
- GET: Returns the definition of the given workflow.
- You can set the URL parameter returnDetails to true to return additional information that might have been provided when creating the workflow. If the parameter is omitted or set to false, only the relevant information (name, parameters, startAction, actions, timestamp) is returned.
- DELETE: Deletes the given workflow.
Usage:
- URL: http://<hostname>:8080/smila/jobmanager/workflows/<workflow-name>/
- Allowed methods:
- GET
- DELETE
- Response status codes:
- 200 OK: Upon successful execution (GET, DELETE). If the workflow to be deleted does not exist, you will get 200 anyway (DELETE).
- 404 Not Found: If the workflow does not exist (GET).
- 400 Bad Request: If the workflow to be deleted is still referenced by a job definition (DELETE).