You need to have the “pipelines” and “events” feature flags enabled to use this feature.
The Pipelines feature lets you connect components in a sequence to accomplish a specific objective. To get started, drag and drop the component(s) you need and create the links between them.
One-to-many or many-to-one links are not currently supported. You can only link one component to another.
Components
1. Create a report
Params
LLM Provider ID: To find this value, go to “My Team” → edit the LLM service provider of your choice → the provider ID is at the end of the URL. For example, in the URL “https://chatbots.dimagi.com/a/<your team>/service_providers/llm/15/” the provider ID is 15.
LLM model: The available models can be seen on the edit page as well.
LLM temperature: A good default is 1
Prompt: The prompt for the LLM to generate a report. Be sure to include the {input} key somewhere in this prompt; the component’s input will be inserted there before the prompt is sent to the LLM.
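To illustrate how the {input} placeholder works, here is a minimal sketch (not the actual pipeline code) assuming the pipeline substitutes the component’s input into the prompt with simple string formatting:

```python
# Illustrative only: the {input} placeholder in the prompt is replaced
# with the component's input before the prompt is sent to the LLM.
prompt = "Summarize the following conversation as a report:\n\n{input}"

def build_llm_prompt(prompt_template: str, component_input: str) -> str:
    """Substitute the component's input into the {input} placeholder."""
    return prompt_template.format(input=component_input)

print(build_llm_prompt(prompt, "User asked about the refund policy."))
```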
2. Render a template
Params
template_string: The template that you want to use. We’re using Jinja templates, so variables are indicated with curly braces around them, e.g. {my_variable}. The input to this component should be a mapping of all variable names expected in the template to their values.
Usage
Suppose you want a template that includes the user’s first name and last name. The template might look like this:
The extracted user’s name is {first_name} and surname is {last_name}.
This template requires two variables:
first_name
last_name
The input should be a mapping that contains a value for each variable in the template, structured as follows:
{"first_name": "John", "last_name": "Doe"}
Ideally, run the Create a Report step before this one and instruct the LLM to format its output so that all the variables the template expects are present.
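The example above can be sketched in Python. This is illustrative only and uses Python’s `str.format`, which matches the single-brace variable syntax shown above; the pipeline’s own template renderer may behave differently.

```python
# Illustrative only: render the example template from a mapping input.
template_string = "The extracted user's name is {first_name} and surname is {last_name}."
template_input = {"first_name": "John", "last_name": "Doe"}

# Each key in the mapping fills the matching variable in the template.
rendered = template_string.format(**template_input)
print(rendered)
```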
3. Send an email
Params
Recipient list
Subject
4. Extract structured data (Basic)
This step is used to extract structured data from the conversation history.
Params
LLM Provider ID: (See Create a Report for where to find this)
LLM model
LLM temperature
data_schema: The schema describing the data you want to extract. It uses the following format:
{ "key1": "description1", "key2": "description2", "key3_list": [ { "key1": "description1", "key2": "description2" } ] }
Usage
Suppose I want to generate a profile for each participant, including their first name, last name, and any pets they have. My schema should look something like:
{ "first_name": "the first name of the user", "last_name": "the last name of the user", "pets": [ {"name": "the name of the pet", "type": "the type of animal e.g. cat or dog"} ] }
Note that the list should only have one schema entry. This entry (or object) describes the entries that should be included in this list. In the example, we want to have a list of pets, described by their name and type.
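To make the schema shape concrete, here is a hypothetical example (not pipeline code) of an extraction result for the pets schema above, along with a small helper that checks the result has the keys the schema describes:

```python
# The schema from the example above: strings describe scalar values,
# and a single-entry list describes the objects expected in that list.
schema = {
    "first_name": "the first name of the user",
    "last_name": "the last name of the user",
    "pets": [{"name": "the name of the pet",
              "type": "the type of animal e.g. cat or dog"}],
}

def matches_schema(result: dict, schema: dict) -> bool:
    """Shallow check: every schema key is present, and list keys hold
    lists of dicts containing the keys of the single schema entry."""
    for key, desc in schema.items():
        if key not in result:
            return False
        if isinstance(desc, list):
            entry_keys = set(desc[0])
            if not all(isinstance(item, dict) and entry_keys <= set(item)
                       for item in result[key]):
                return False
    return True

# A result the LLM might plausibly extract for this schema.
result = {
    "first_name": "John",
    "last_name": "Doe",
    "pets": [{"name": "Rex", "type": "dog"}],
}
print(matches_schema(result, schema))  # True
```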
5. Update participant memory
This step updates a participant's memory or data. To view a participant's data, go to Participants in the sidebar, select the participant, open the experiment they took part in, and click the "Participant Data" tab. Note that the data is structured as a mapping between keys and values.
Params
key_name: This refers to the key within the participant data mapping where the input value will be stored.
Usage
Since participant memory (data) is a mapping of keys and values, a value can only be stored against a key. This step therefore requires a "key_name" when the input is a string (plain text) or a list. When the input is a dictionary (a key-value mapping), specifying a key is optional.
In other words, if a preceding step produces text or a list, you must specify a "key_name". If the preceding step produces a mapping (for example, when the Extract Structured Data step runs before this one), a key is optional.
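The rules above can be sketched as follows. This is an illustrative Python sketch of the behavior, not the pipeline’s actual implementation:

```python
# Illustrative only: how the input might be stored in participant data
# depending on its type and whether key_name is given.
def update_participant_data(data: dict, value, key_name=None) -> dict:
    """Store `value` in the participant data mapping."""
    if isinstance(value, dict) and key_name is None:
        # A mapping can be merged directly; no key is required.
        data.update(value)
    else:
        # Strings and lists must be stored under a key.
        if key_name is None:
            raise ValueError("key_name is required for string or list input")
        data[key_name] = value
    return data

data = update_participant_data({}, {"first_name": "John"})
data = update_participant_data(data, ["cat", "dog"], key_name="pets")
print(data)  # {'first_name': 'John', 'pets': ['cat', 'dog']}
```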
When will my pipeline run?
A pipeline is initiated by a static trigger. Create a new static trigger for your experiment and select the "start a pipeline" option. You can then choose which pipeline to execute and provide the necessary input.