Features and changelog
Upcoming
Improved UI for pipelines
Past Updates
Dec 12, 2024:
New When an experiment is in “debug mode”, errors will be shown to the user in the chat UI.
Nov 6, 2024:
CHANGE Small UI update to (1) auto-populate channel names with the experiment’s name and (2) improve the help text
New Added the ability to toggle whether responses should include source citations for assistant-based experiments. To toggle this, select the “assistant” type on the experiment’s edit screen, then go to the “Advanced” tab, where you will see a “Citations enabled” setting. When this setting is “off”, no citations will be present in the response.
Oct 29, 2024:
CHANGE [Security update] Participants will now have to verify their email address before they can continue to the chat session.
Aug 16, 2024:
CHANGE API update: Ability to filter experiment session results using a newly supported “tags” parameter.
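Assuming the sessions endpoint accepts tags as a comma-separated query parameter (the endpoint path and value format below are illustrative assumptions, not confirmed details), a filtered request URL could be built like this:

```python
from urllib.parse import urlencode

# Hypothetical: filter experiment sessions by tag via the new "tags" parameter.
# The endpoint path and the comma-separated value format are assumptions.
base_url = "https://chatbots.dimagi.com/api/sessions/"
params = {"tags": "follow-up,spanish"}  # placeholder tag names
url = f"{base_url}?{urlencode(params)}"
print(url)
```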
Aug 16, 2024:
New Tracing
New Dynamic voice support in a multi-bot setup. Users can now choose to let the child bot’s voice be used if it generated the output in a multi-bot setup. This behaviour is disabled by default, but can be enabled by going to the experiment’s voice configuration settings and checking the “Use processor bot voice” box.
Aug 14, 2024:
bug Fixed tagging for cases where assistants were used as child bots in a router setup
CHANGE Assistants can no longer be used as router bots, since this corrupts the conversation history on OpenAI’s side.
Aug 5, 2024:
New Allow sorting of the “Started” and “Last Message” columns in the experiment sessions view
New Terminal bot. You can now configure a bot (from an experiment’s homepage) to run at the end of every inference call. This terminal bot will change the input according to the configured prompt, even in a multi-bot configuration. A terminal bot is useful when you want to ensure that the bot always responds in a certain language.
CHANGE The {source_material} and {participant_data} prompt variables can now only be used once in a prompt. Including either variable more than once in the same prompt is not allowed.
bug Fixed an issue where assistant-generated Word (.docx) files (and possibly others) were being corrupted.
Aug 1, 2024:
Improved data extraction to handle long contexts
July 26, 2024:
new File download support for assistant bots. Cited files that were uploaded by the user or generated by the assistant can now be downloaded from within the chat. Please note that this only applies to webchats, and the user must be logged in to download these files.
new Twilio numbers will now be verified at the selected provider account before allowing the user to link the WhatsApp account to an experiment. Please note that this will not be done for Turn.io numbers, since they do not provide a mechanism for checking numbers.
July 19, 2024:
new In-conversation file uploads for assistant bots on web-based chats. These files are scoped to the current chat/OpenAI thread, so other sessions with the same bot will not be able to query these files.
July 15, 2024:
New Participant data extraction through pipelines.
You need the “Pipelines” feature flag enabled.
Usage information can be found here.
bug Normalize numbers when adding or updating a WhatsApp channel. This helps avoid accidentally creating another WhatsApp channel with the same number in a different format.
bug Verify the Telegram token when adding a Telegram channel
July 8, 2024:
NEW Add a {current_datetime} prompt variable to inject the current date and time into the prompt
bug Fixed a bug with syncing files to OpenAI assistant vector stores
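The {current_datetime} variable behaves like a template placeholder; the actual substitution happens on the platform side, but as a rough sketch:

```python
from datetime import datetime

# Illustrative only: the platform, not user code, fills in {current_datetime}.
# The exact timestamp format the platform injects is an assumption here.
prompt_template = "You are a helpful bot. The current date and time is {current_datetime}."
prompt = prompt_template.format(current_datetime=datetime(2024, 7, 8, 9, 30).isoformat())
print(prompt)
```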
bug Ensure API keys have the current team attached to them
bug Enforce team slugs to be lowercase
bug Update chat history compression to take the full prompt into account, including source material, participant data, etc.
CHANGE Redo the UI to show the team on all pages and use a dropdown for team switching and team management links
CHANGE Hide API sessions from the ‘My Sessions’ list
API changes:
NEW API documentation and schema now available at https://chatbots.dimagi.com/api/docs/
NEW Experiment session list, detail, and create APIs
CHANGE The channel message API now takes an optional session field in the request body to specify the ID of a specific session
CHANGE The experiment API output field experiment_id was renamed to id
CHANGE The participant data update API POST body format has changed. New format:
[ {"experiment": "<experiment_id>", "data": {"property1": "value1"}} ]
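The new body is a JSON list with one entry per experiment. In Python it could be serialized like this ("<experiment_id>" and the data keys are placeholder values from the format above):

```python
import json

# Build the new POST body for the participant data update API.
# "<experiment_id>", "property1", and "value1" are placeholders.
body = [
    {"experiment": "<experiment_id>", "data": {"property1": "value1"}}
]
payload = json.dumps(body)
print(payload)
```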
NEW OpenAI-compatible ‘Chat Completions’ API for experiments. See the API docs.
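Since the endpoint is OpenAI-compatible, the request body follows the standard Chat Completions shape. A minimal sketch (how the target experiment is identified, e.g. via the model field or the URL path, is an assumption to check against the API docs):

```python
import json

# Minimal OpenAI-style Chat Completions request body. How the experiment is
# selected (model field vs. URL path) is an assumption; consult the API docs.
payload = {
    "model": "<experiment_id>",  # placeholder; the exact identifier scheme may differ
    "messages": [{"role": "user", "content": "Hello"}],
}
print(json.dumps(payload))
```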
NEW User interface for creating and editing Experiment Routes (parent/child experiments)
New tab on the main experiment page
Jun 24, 2024
Messages are now tagged based on which bot generated the response in a multi-bot setup
Jun 17, 2024
Pipelines v2
Look for “Pipelines” in the sidebar
Ability to run a pipeline based on event triggers
Ability to create a pipeline visually
Participant view
View participants with their data
We added a view where users can see all participants, when they initially joined, and which experiments they participated in.
Experiment admins will also be able to add or update data associated with a participant. When the experiment prompt includes the {participant_data} variable, this data will be visible to the bot, and participant-specific details can be referenced or considered during the conversation.
Note that participant data is scoped to a specific experiment. This means you have to add data pertaining to a particular participant to each experiment manually.
Slack integration.
Ability to connect to a Slack workspace as a messaging provider
Ability to link an experiment to a Slack Channel (via an experiment channel)
Ability to have one ‘fallback’ experiment which can respond to messages on channels where no other experiment is assigned
Experiment responds to ‘mentions’ and creates a new session as a Slack thread
Jun 4, 2024
Individual tool selection
Users can now choose which tools to give to the bot to use.
Previously this was obscured by a single “tools enabled” checkbox which, when enabled, gave the bot all the tools that existed at that time.
Tools include: One-off Reminder, Recurring Reminder and Schedule Update
One-off Reminder: This allows the bot to create a one time reminder message for some time in the future.
Recurring Reminder: This allows the bot to create recurring reminder messages.
Schedule Update: This allows the bot to update existing scheduled messages. Please note that this tool cannot update reminders created with the one-off and recurring reminder tools, since those are legacy tools built on a different framework. Future work will address this.
Keeping users in the loop
When an error occurs while processing a user message, the bot will tell the user that something went wrong instead of staying silent
May 21, 2024
OpenAI Assistants v2 support
Support for the new OpenAI Assistants version 2 API. Version 2 allows users to attach up to 10,000 files. See this page for more information.
Multi-experiment bots
Allows users to combine multiple bots into a single bot, called the “parent” bot. The parent bot will decide which of the “child” bots the user query should be routed to, based on the prompt.
The prompt should give the parent bot clear instructions on how to decide where to route the user query.
Example usage can be found at Multi-bot setup.
May 15, 2024
Experiment search improvements
The search string will be used to search for experiments by name first, then by description
OpenChatStudio API
May 10, 2024
Bots are given knowledge of the date and time
Added a new event type called “A new participant joined the experiment”
Added a new event handler to create scheduled messages
May 9, 2024
Show participant data in the session review page
If participant data exists and was included in the prompt, then reviewers will be able to see the data in the session review page. Seeing the data that the bot had access to might help with understanding the conversation.
Anthropic now also supports tool usage!
Apr 30, 2024
Participant data
Participant data allows the bot to tailor responses by considering details about the participant.
To include participant data in the prompt, go to the experiment edit page and add the {participant_data} variable to an appropriate place in the prompt.
Where does this data come from? Currently it must be created manually through the admin dashboard, but a near-future release will include a dedicated page to view and edit participant data.
Please note that participant data is experiment-specific, meaning that the data we have for a participant in one experiment may differ from that in another experiment.