Execute a pipeline

Executing a pipeline creates an execution on the platform. Each execution is associated with a pipeline and with the values of its inputs and outputs. It runs the pipeline in one or more Kubernetes containers, using the computational resources available on the environment. All the results and artifacts of an execution can be retrieved in the Execution Tracking tab.

There are two ways to execute a pipeline:

  • by creating a deployment: the execution then depends on the selected execution rule and is performed when the execution condition is met (a call to an endpoint, the periodicity of a CRON, etc.)

  • by running it instantly with the SDK: in that case you must provide a value for each input of the pipeline.

Summary

  1. Run a pipeline

  2. Trigger a deployment by endpoint with the Craft AI SDK

  3. Trigger a deployment by endpoint with an HTTP request

Function name: run_pipeline

Method: run_pipeline(pipeline_name, inputs=None, inputs_mapping=None, outputs_mapping=None)

Return type: dict

Description: Executes the pipeline on the platform.

Run a pipeline

A run is an execution of a pipeline on the platform. The following SDK function runs a pipeline and creates an execution.

run_pipeline(pipeline_name, inputs=None, inputs_mapping=None, outputs_mapping=None)

Parameters

  • pipeline_name (str) – Name of an existing pipeline.

  • inputs (dict, optional) - Dictionary of inputs to pass to the pipeline, with input names as dict keys and the corresponding values as dict values. For files, the value should be the path to the file or the file content as an instance of io.IOBase. Defaults to None.

  • inputs_mapping (list of instances of InputSource) - List of input mappings, to map pipeline inputs to different sources (such as environment variables). See InputSource for more details.

  • outputs_mapping (list of instances of OutputDestination) - List of output mappings, to map pipeline outputs to different destinations (such as the datastore). See OutputDestination for more details.

Returns

Created pipeline execution represented as a dict, with output names as keys and the corresponding output values as values.
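
For example, here is a minimal sketch of a run, assuming an already-configured Craft AI SDK client named sdk and an existing pipeline named my_pipeline with inputs input1 and input2 (all of these names are hypothetical):

# Run the pipeline on the platform.
# `sdk` is assumed to be an already-configured Craft AI SDK client;
# "my_pipeline", "input1" and "input2" are hypothetical names.
execution = sdk.run_pipeline(
    pipeline_name="my_pipeline",
    inputs={
        "input1": "value1",
        "input2": [1, 2, 3],
    },
)

# As described above, the returned dict maps each output name to its value.
print(execution)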

Trigger a deployment with an endpoint execution rule using the Craft AI SDK

SDK function that triggers a deployment of the pipeline through its endpoint.

sdk.trigger_endpoint(endpoint_name, endpoint_token, inputs={}, wait_for_results=True)

Parameters

  • endpoint_name (str) – Name of the endpoint.

  • endpoint_token (str) – Token to access endpoint.

  • inputs (dict) - Input values for the endpoint call.

  • wait_for_results (bool) – Automatically calls retrieve_endpoint_results (True by default).

Returns

Created pipeline execution represented as a dict.
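
For example, a minimal sketch of a call, assuming a deployment exposed at an endpoint named my_endpoint and a token stored in the ENDPOINT_TOKEN variable (both hypothetical placeholders):

# Trigger the deployment through its endpoint and wait for the results.
# "my_endpoint" and ENDPOINT_TOKEN are hypothetical placeholders.
execution = sdk.trigger_endpoint(
    endpoint_name="my_endpoint",
    endpoint_token=ENDPOINT_TOKEN,
    inputs={
        "input1": "value1",
        "input2": [1, 2, 3],
    },
    wait_for_results=True,
)

# With wait_for_results=True, retrieve_endpoint_results is called
# automatically and the execution is returned as a dict.
print(execution)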

Trigger a deployment with an endpoint execution rule using an HTTP request

To trigger a deployment that is set up with an endpoint, you can also send an HTTP request containing the values of the inputs defined in the pipeline.

Example in Python for variable inputs:

import requests

# Call the endpoint, passing each pipeline input as a field of the JSON body.
r = requests.post(
    "https://your_environment_url/my_endpoint",
    json={
        "input1": "value1",
        "input2": [1, 2, 3],
        "input3": False,
    },
    headers={"Authorization": "EndpointToken " + ENDPOINT_TOKEN},
)

Example in Python for a file input (not available with auto-mapping):

import requests

# Call the endpoint, sending the file as multipart form data under the
# pipeline's file input name (here "data").
r = requests.post(
    "https://your_environment_url/my_multistep_endpoint",
    files={"data": open("my_file.txt", "rb")},
    headers={"Authorization": "EndpointToken " + ENDPOINT_TOKEN},
)
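
In both cases the endpoint answers with a standard HTTP response. A short, generic way to inspect it with requests (the exact body depends on your pipeline and execution rule) could be:

# Inspect the HTTP response returned by the endpoint.
print(r.status_code)  # HTTP status of the call
print(r.json())       # body of the response, if it is JSON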

Warning

We have shown in this documentation how to trigger the endpoint with Python, but you can of course send a request from any tool (curl, Postman, JavaScript, etc.).