3.1 Part 1: Deploy a simple pipeline
The main goal of the Craft AI platform is to let you deploy your machine learning pipelines easily. In this first part, you will learn how to deploy some simple code to the platform in a few stages.
3.1.1 Overview of the first App
In this part we will use the platform to build a simple “hello world” application: we will deploy a basic Python function that prints “Hello world” and the number of days until 2024.
The code of our application is defined in the following function:
import datetime
def helloWorld() -> None:
    now = datetime.datetime.now()
    difference = datetime.datetime(2024, 1, 1) - now
    print(f"Hello world ! Number of days to 2024 : {difference.days}")
Although the platform is built to deploy complex ML code, for the sake of simplicity we are going to start with this simple code that you might be familiar with. (Don’t worry, we will use more realistic examples later).
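Before deploying anything, you can run the function locally as a quick sanity check; this is plain Python and involves nothing platform-specific:

```python
import datetime

def helloWorld() -> None:
    now = datetime.datetime.now()
    difference = datetime.datetime(2024, 1, 1) - now
    print(f"Hello world ! Number of days to 2024 : {difference.days}")

# Running it locally simply prints the message with the current day count.
helloWorld()
```

The printed number depends on the day you run it, which is exactly what we will later see in the execution logs on the platform.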
To build a machine learning application with the Craft AI platform, you must turn your code into a pipeline.
A pipeline is a machine learning workflow, consisting of one or more steps, that can be easily deployed on the Craft AI platform. Like a regular function, a step is defined by the inputs it ingests, the code it runs, and the outputs it returns. You can then create a full pipeline, formed as a directed acyclic graph (DAG), by specifying the output of one step as the input of another step.
Foreword
At the moment, the platform allows the user to create a pipeline with a single step inside. More options will be available in future releases.
In this part you will learn how to:
Package your application code into a step on the platform
Embed it in a Pipeline
Deploy it so it can be triggered via an endpoint
Check the logs of the executions on the web interface
You can test this example directly on the platform to understand the mechanics presented. All the code used in this tutorial can be found here.
3.1.2 Step creation with the SDK
The first thing to do to build an application on the Craft AI platform is to create a Step.

A Step is the equivalent of a Python function in the Craft AI platform. Like a regular function, a step is defined by the inputs it ingests, the code it runs, and the outputs it returns. For this “hello world” use case, we are focusing on the code part so we will ignore inputs and outputs for now.
A step is created from any function located in the source code of your repository, using the create_step() method of the sdk object.
It is very important to understand that the platform can only create steps from the code present in your GitHub repository, on the branch specified during setup. Any uncommitted changes won't be taken into account at step creation.
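Since only committed (and pushed) code is visible to the platform, a quick way to verify your working tree before creating a step is to check `git status`. The snippet below is an illustrative check run from your repository, not part of the Craft AI SDK:

```python
import subprocess

# The platform builds steps from the last pushed commit of the configured
# branch, so make sure there are no uncommitted changes left behind.
result = subprocess.run(
    ["git", "status", "--porcelain"],
    capture_output=True, text=True
)
dirty = result.stdout.strip()
if dirty:
    print("Uncommitted changes detected - commit and push them first:")
    print(dirty)
else:
    print("Working tree clean.")
```

Remember to also push your branch, since the platform reads the code from GitHub, not from your local clone.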
In our case, assuming our helloWorld function is located in src/part-1-helloWorld.py, we can create our first step on the platform as follows:
sdk.create_step(
    step_name="part-1-hello-world-step",
    function_path="src/part-1-helloWorld.py",
    function_name="helloWorld"
)
Its main arguments are the following:
The step_name is the name of the step that will be created. This is the identifier you will use later to refer to this step.
The function_path argument is the path of the Python module containing the function that you want to execute for this step. This path must be relative to the root of the git repository.
The function_name argument is the name of the function that you want to execute for this step.
The above code should give you the following output:
>>> Please wait while step is being created. This may take a while...
>>> Steps creation succeeded
>>> {'name': 'part-1-hello-world-step'}
You can view the list of steps that you created on the platform with the list_steps() function of the SDK.
step_list = sdk.list_steps()
print(step_list)
>>> [{'name': 'part-1-hello-world-step',
>>> 'created_at': '2023-01-16T14:41:04.943Z',
>>> 'updated_at': '2023-01-16T14:41:04.943Z',
>>> 'status': 'Ready'}]
You can see your step and its creation status (Pending or Ready).
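Since a step can stay Pending for a while, you may want to wait until it is Ready before building a pipeline on top of it. Below is a minimal polling sketch; wait_until_ready is a hypothetical helper written on top of the SDK, not an SDK function:

```python
import time

def wait_until_ready(get_status, timeout=300, interval=5):
    """Poll get_status() until it returns 'Ready', or raise after timeout seconds."""
    elapsed = 0
    while elapsed < timeout:
        if get_status() == "Ready":
            return True
        time.sleep(interval)
        elapsed += interval
    raise TimeoutError("Step was not Ready within the timeout")

# Usage with the SDK (assuming sdk is already configured):
# wait_until_ready(lambda: sdk.get_step("part-1-hello-world-step")["status"])
```

Passing the status lookup as a callable keeps the helper independent of the SDK, so it works the same for any resource that exposes a status field.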
You can also get the information of a specific step with the get_step() function of the SDK.
step_info = sdk.get_step("part-1-hello-world-step")
print(step_info)
>>> {'name': 'part-1-hello-world-step',
>>> 'created_at': '2023-01-16T14:41:04.943Z',
>>> 'updated_at': '2023-01-16T14:41:04.943Z',
>>> 'status': 'Ready',
>>> 'inputs': [],
>>> 'outputs': []}
3.1.3 Create a pipeline

The step part-1-hello-world-step containing our helloWorld code is now created on the platform and ready to be used in a pipeline, which we will then deploy on the platform as an endpoint. In the future, it will be possible to assemble multiple steps into a complex machine learning pipeline. For now, the platform only allows single-step pipelines.
To create a pipeline consisting of the previous step, you must use the create_pipeline() function of the SDK.
sdk.create_pipeline(
    pipeline_name="part-1-hello-world-pipeline",
    step_name="part-1-hello-world-step",
)
This function has two arguments:
The pipeline_name is the name of the pipeline you have just created. As with the step_name, you will then refer to the pipeline using this name.
The step_name is the name of the step used in the pipeline.
After executing this function, you should see the following output:
>>> Pipeline creation succeeded
>>> {'pipeline_name': 'part-1-hello-world-pipeline',
>>> 'created_at': '2023-02-09T10:15:12.566Z',
>>> 'steps': ['part-1-hello-world-step'],
>>> 'open_inputs': [],
>>> 'open_outputs': []}
Now that our pipeline is created around our step, we want to execute it. To do this, we will create an endpoint that lets us easily execute the code contained in the step.
3.1.4 Deploy your Pipeline through an Endpoint

An Endpoint is a publicly accessible URL that launches the execution of the Pipeline.
Without the platform, you would need to write an API with a library like Flask, FastAPI, or Django and deploy it on a server that you would have to maintain.
With the platform, you can create an Endpoint with a single call to the create_deployment() function of the SDK, choosing a deployment_name. This name is used to reference the created endpoint and also appears in its URL.
endpoint = sdk.create_deployment(
    pipeline_name="part-1-hello-world-pipeline",
    deployment_name="part-1-hello-world-endpoint",
    execution_rule="endpoint"
)
print(endpoint)
You should see the following output:
>>> {'name': 'part-1-hello-world-endpoint', 'endpoint_token': '162nLmwmPAB7fXk2-29_pRyfA3WvC-kprUeLn9VARwRibWPUqWU1HkR7Lbe0Ji0wjJp0e9kBa2maXZ_igJHK_g'}
>>> Endpoint creation succeeded
You can check the endpoints you created using the sdk.list_deployments() function:
sdk.list_deployments()
>>> [{'name': 'part-1-hello-world-endpoint',
>>> 'pipeline': {'name': 'part-1-hello-world-pipeline'},
>>> 'id': 'c7ed29ab-2ba5-48e8-bbf1-d46ee68ee888',
>>> 'executions_count': 0}]
3.1.5 Call your endpoint to execute the pipeline
Once your endpoint is created, you can trigger it with a direct HTTP call using the endpoint token. Calling the endpoint executes the associated pipeline. In Python, it can be done with:
import requests
endpoint_URL = sdk.base_environment_url + "/endpoints/" + endpoint["name"]
headers = {"Authorization": "EndpointToken " + endpoint["endpoint_token"]}
request = requests.post(endpoint_URL, headers=headers)
print(request.status_code)
>>> 200
The HTTP code 200 indicates that the request was accepted. In case of an error, you can expect a status code in the 4XX or 5XX range.
But of course, you can call the endpoint in any other way (a curl command in bash, Postman, etc.).
Now we have executed the pipeline by calling the endpoint. The response to this request shows that the request was accepted, but it gives us neither the status of the execution nor its logs (the endpoint response can carry outputs, but we don't return any here).
The SDK lets us retrieve the results of executions; let's see how it works.
3.1.6 Check execution status and logs
Once your pipeline has been executed through the endpoint, you can see the pipeline executions with the sdk.list_pipeline_executions() command.
sdk.list_pipeline_executions(
    pipeline_name="part-1-hello-world-pipeline"
)
>>> [{'execution_id': 'part-1-hello-world-pipeline-XXXX',
>>> 'status': 'Succeeded',
>>> 'created_at': '2023-02-01T09:21:19.020Z',
>>> 'end_date': '2023-02-01T09:21:29.000Z',
>>> 'created_by': 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx',
>>> 'pipeline_name': 'part-1-hello-world-pipeline',
>>> 'deployment_id': 'xxxxxxxx-xxxx-xxxx-xxx-xxxxxxxx',
>>> 'steps':
>>> [{'name': 'part-1-hello-world-step',
>>> 'status': 'Succeeded',
>>> 'end_date': '2023-02-01T09:21:24.000Z',
>>> 'start_date': '2023-02-01T09:21:19.000Z'}]}]
Furthermore, you can get the logs of a specific execution with the sdk.get_pipeline_execution_logs() command. You will have to fill in the execution ID, which can be found with the previous command. The logs are returned line by line as JSON objects, but we can display them cleanly with the print() call below. This also lets us see any error messages produced by the step code if its execution fails.
pipeline_executions = sdk.list_pipeline_executions(
    pipeline_name="part-1-hello-world-pipeline"
)
logs = sdk.get_pipeline_execution_logs(
    pipeline_name="part-1-hello-world-pipeline",
    execution_id=pipeline_executions[-1]["execution_id"]  # [-1] to get the last execution
)
print("\n".join(log["message"] for log in logs))
print('\n'.join(log["message"] for log in logs))
>>> Please wait while logs are being downloaded. This may take a while…
>>> Hello world ! Number of days to 2024 : 334
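Since list_pipeline_executions() returns plain dictionaries like the ones shown above, you can easily post-process them in Python, for example to count executions by status. This is a small illustrative helper, not an SDK feature:

```python
from collections import Counter

def summarize_executions(executions):
    """Count pipeline executions by status ('Succeeded', 'Failed', ...)."""
    return Counter(execution["status"] for execution in executions)

# Example with the shape returned by sdk.list_pipeline_executions():
executions = [
    {"execution_id": "part-1-hello-world-pipeline-1", "status": "Succeeded"},
    {"execution_id": "part-1-hello-world-pipeline-2", "status": "Failed"},
    {"execution_id": "part-1-hello-world-pipeline-3", "status": "Succeeded"},
]
print(summarize_executions(executions))  # Counter({'Succeeded': 2, 'Failed': 1})
```

In a real project, you would pass the list returned by sdk.list_pipeline_executions(pipeline_name=...) directly to this helper.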
To find the list of executions, along with the associated information and logs, more easily, you can use the user interface, as follows:
Connect to https://mlops-platform.craft.ai
Click on your project
Click on the Execution page
Click on “Select an execution”: this displays the list of environments.
Select your environment to get the list of endpoints:
Finally, click on an endpoint name to get its executions:
You have two tabs: “General” to get general information about your execution and “Logs” where you can see and download the execution logs:
3.1.6.1 What we have learned:
In this part we learned how to easily build, deploy, and use a simple application with the Craft AI platform, following this workflow:

These three main steps are the fundamental workflow of the platform, and we will see them over and over throughout this tutorial.
3.1.6.2 What next?
Now that we know how to run our code on the platform, it is time to create more complex steps to have a real ML use case.
Next step : Part 2: Deploy with configuration step