Spotter AI APIs
ThoughtSpot's Spotter AI APIs (Beta) allow users to query and explore data through conversational interactions.
- POST /api/rest/2.0/ai/conversation/create
  Creates a conversation session with Spotter to generate Answers from a specific data Model. The resulting session sets the context for subsequent queries and responses.
- POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse
  Sends a message or follow-up query to an ongoing conversation session.
- POST /api/rest/2.0/ai/answer/create
  Generates an Answer for a natural language query specified in the API request.
Note
The Spotter AI APIs are in Beta and disabled by default on ThoughtSpot instances. To enable these APIs on your instance, contact ThoughtSpot Support.
Overview
Spotter AI APIs collectively support natural-language-driven analytics, context-aware and guided data analysis, and integration with agentic systems.
The key capabilities of the Spotter APIs include the following:
- Initiating and managing conversational sessions
- Processing natural-language queries and interpreting user intent
- Generating analytical responses, insights, and visualizations
- Decomposing complex user queries
Spotter manages conversation sessions, context tracking, and response generation for user-submitted queries. The Spotter APIs are designed for use in Spotter-driven analytics and also for agentic interactions within an orchestrated agent framework.
Locale settings for API requests
When using the Single Answer and Send message APIs, the locale used for API requests depends on your applicationโs locale settings:
- If your application is set to "Use browser language," the API will not apply the default locale. In this case, you must explicitly include the desired locale code in the Accept-Language header of your API request. If you do not specify the locale, the API may not return responses in the expected language or regional format.
- If you have set a specific locale in your ThoughtSpot instance or user profile, the API will use this locale to generate responses, overriding the browser or OS locale.
To ensure consistent localization, set the Accept-Language header in your API requests when relying on browser language detection, or configure the locale explicitly in the user profile settings in ThoughtSpot.
API endpoints
Each of the Spotter AI APIs serves a specific function:
| Category | API endpoints |
|---|---|
| Conversational analytics with Spotter (Classic) | POST /api/rest/2.0/ai/conversation/create, POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse, POST /api/rest/2.0/ai/answer/create |
| Advanced analytics and agentic interaction | POST /api/rest/2.0/ai/agent/conversation/create, POST /api/rest/2.0/ai/agent/converse/sse, POST /api/rest/2.0/ai/agent/{conversation_identifier}/converse |
| Data literacy and guided analysis | POST /api/rest/2.0/ai/relevant-questions/ |
| NL instructions to coach Spotter | POST /api/rest/2.0/ai/instructions/set, POST /api/rest/2.0/ai/instructions/get |
Per-user API rate limits
The following rate limits apply to Spotter agent APIs per user:
- A maximum of 10 conversation creation requests per minute.
- A maximum of 30 query messages to a conversation session per minute.
| API request | Rate limit (per user, per minute) |
|---|---|
| Conversation creation | 10 |
| Query message to a conversation session | 30 |
| Single Answer generation | 30 |
If you are integrating these APIs in your environment, consider implementing retry logic to handle rate limit errors.
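For example, a minimal backoff loop in Python (using the requests library) can absorb transient rate-limit errors. This is a sketch only; the host, token, and the assumption that the API signals rate limiting with an HTTP 429 status code are illustrative, not part of the documented contract.

```python
import time
import requests

def post_with_retry(url, payload, token, max_retries=5):
    """POST to a Spotter API endpoint, backing off when the per-user rate limit is hit."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    response = None
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:   # assumed rate-limit status; any other result is returned as-is
            return response
        time.sleep(2 ** attempt)          # exponential backoff: 1s, 2s, 4s, ...
    return response
```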
Conversational analytics with Spotter (Classic)
In Spotter Classic mode, Spotter manages the conversation session and context. These APIs allow users to interact directly with Spotter, without agentic capabilities or an agent framework.
Create a conversation session
To create a conversation session with Spotter, send a POST request to the /api/rest/2.0/ai/conversation/create API endpoint. The resulting conversation session maintains the context and can be used to send queries and follow-up questions to generate answers.
Request parameters
Include the following parameters in the request body:
| Form parameter | Description |
|---|---|
| metadata_identifier | String. Required. Specify the GUID of the data source object, such as a ThoughtSpot Model. The metadata object specified in the API request will be used as the data source for the conversation. |
| tokens | String. Optional. To set the context for the conversation, you can specify a set of keywords as a token string. For example, [sales],[item type],[Jackets]. |
Example requests
With tokens
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"tokens": "[sales],[item type],[Jackets]"
}'
Without tokens
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'
API response
If the API request is successful, a conversation identifier is created. Note the GUID of the conversation and use it when sending follow-up queries.
{"conversation_identifier":"98f9b8b0-6224-4f9d-b61c-f41307bb6a89"}
Send a query to a conversation session
To send a question or follow-up query to an ongoing conversation session, send a POST request with the conversation ID and the query text to the POST /api/rest/2.0/ai/conversation/{conversation_identifier}/converse API endpoint.
This API endpoint supports only the conversation sessions created using the POST /api/rest/2.0/ai/conversation/create API call.
Request parameters
| Parameter | Type | Description |
|---|---|---|
| conversation_identifier | Path parameter | String. Required. Specify the GUID of the conversation received from the create conversation API call. |
| metadata_identifier | Form parameter | String. Required. Specify the GUID of the data source object, for example, a Model. The metadata object specified in the API request will be used as the data source for the follow-up conversation. |
| message | Form parameter | String. Required. Specify a natural language query string. For example, Top performing products in the west coast. |
Example request
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/03f48527-b973-4efa-81fd-a8568a4f9e78/converse' \
-H 'Accept: application/json' \
-H 'accept-language: en-US' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"message": "Top performing products in the west coast"
}'
API response
If the API request is successful, the following data is sent in the API response:
- session_identifier: GUID of the Answer session.
- generation_number: Number assigned to the Answer session.
- message_type: Type of response received for the query. For example, TSAnswer (ThoughtSpot Answer).
- visualization_type: The data format of the generated Answer, for example, a chart or table. When you download this Answer, the data will be exported in the format indicated by the visualization_type.
- tokens: Tokens generated from the natural language search query specified in the API request. These tokens can be used as input to the /api/rest/2.0/ai/conversation/create API endpoint to set the context for a conversation session.
Note
Note the session ID and generation number. To export the Answer generated from this conversation, send these attributes in the Answer report API request.
[
{
"session_identifier": "1290f8bc-415a-4ecb-ae3b-e1daa593eb24",
"generation_number": 3,
"message_type": "TSAnswer",
"visualization_type": "Chart",
"tokens": "[sales], [state], [item type], [region] = [region].'west', sort by [sales] descending"
}
]
Ask follow-up questions
The API retains the context of previous queries when you send follow-up questions. To verify this, you can send another API request with a follow-up question to drill down into the data.
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/conversation/03f48527-b973-4efa-81fd-a8568a4f9e78/converse' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"message": "which city has the better sales of jackets here?"
}'
The API retains the context of the initial question and returns a response:
[
{
"session_identifier": "ee077665-08e1-4a9d-bfdf-7b2fe0ca5c79",
"generation_number": 3,
"message_type": "TSAnswer",
"visualization_type": "Table",
"tokens": "[sales], by [city], [state], [item type] = [item type].'jackets', [region] = [region].'west', sort by [sales] descending"
}
]
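A minimal Python sketch of this flow might look like the following. The host, token, Model GUID, and conversation ID are placeholders, and the converse helper name is illustrative.

```python
import requests

HOST = "https://{ThoughtSpot-Host}"   # replace with your instance URL
TOKEN = "{AUTH_TOKEN}"
MODEL_ID = "cd252e5c-b552-49a8-821d-3eadaa049cca"
CONVERSATION_ID = "98f9b8b0-6224-4f9d-b61c-f41307bb6a89"  # from the create call

HEADERS = {"Authorization": f"Bearer {TOKEN}",
           "Content-Type": "application/json",
           "Accept": "application/json",
           "Accept-Language": "en-US"}

def converse(message):
    """Send one message to the conversation session and return the Answer object."""
    resp = requests.post(
        f"{HOST}/api/rest/2.0/ai/conversation/{CONVERSATION_ID}/converse",
        headers=HEADERS,
        json={"metadata_identifier": MODEL_ID, "message": message},
    )
    resp.raise_for_status()
    return resp.json()[0]   # the API returns an array with one Answer object

first = converse("Top performing products in the west coast")
follow_up = converse("which city has the better sales of jackets here?")

# session_identifier and generation_number identify the Answer for export;
# the tokens string can seed a new conversation via /ai/conversation/create.
print(follow_up["session_identifier"], follow_up["generation_number"])
print(follow_up["tokens"])
```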
Generate a single Answer
To generate an Answer from a natural language search query, send a POST request to the /api/rest/2.0/ai/answer/create API endpoint. In the request body, include the query and the data source ID.
Request parameters
| Form parameter | Description |
|---|---|
| query | String. Required. Specify a natural language query string. For example, Top performing products in the west coast. |
| metadata_identifier | String. Required. Specify the GUID of the data source object, for example, a Model. The metadata object specified in the API request will be used as the data source for the Answer. |
Example request
In the following example, a query string and the model ID are included in the request body to set the context of the conversation.
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/answer/create' \
-H 'Accept: application/json' \
-H 'accept-language: en-US' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"query": "Top performing products in the west coast",
"metadata_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}'
API response
If the API request is successful, the following data is sent in the API response:
- session_identifier: GUID of the Answer session.
- generation_number: Number assigned to the Answer session.
- message_type: Type of response received for the query. For example, TSAnswer (ThoughtSpot Answer).
- visualization_type: The data format of the generated Answer; for example, a chart or table. When you download this Answer, the data will be exported in the format indicated by the visualization_type.
- tokens: Tokens generated from the natural language search query specified in the API request. These tokens can be used as input to the /api/rest/2.0/ai/conversation/create endpoint to set the context for a conversation session.
Note
Note the session ID and generation number. To export the result generated from this API call, send these attributes in the Answer report API request.
[{
"session_identifier": "57784fa1-10fa-431d-8d82-a1657d627bbe",
"generation_number": 2,
"message_type": "TSAnswer",
"visualization_type": "Undefined",
"tokens": "[product], [region] = [region].'west', sort by [sales] descending"
}]
Conversational analytics with Spotter agent
Spotter agent is an advanced, agentic version of Spotter, which supports context-aware interactions, data literacy features, and follow-up conversations for deeper analytics. Spotter agent can be used for complex reasoning and agentic interactions in an orchestrated framework.
Create a conversation session with Spotter agent
The /api/rest/2.0/ai/agent/conversation/create API endpoint allows you to initiate a new conversation session with Spotter Agent for different data contexts, such as Answers, Liveboards, or Models.
Note
Clients must have at least view access to the objects specified in the API request to create a conversation context and use it for subsequent queries.
Request parameters
To set the context for the conversation session, you must specify the metadata type and context in the POST request body. Optionally, you can also define additional parameters to refine the data context and generate precise responses.
| Form parameter | Description |
|---|---|
| metadata_context | Required. Defines the data context for the conversation. Specify the context type (data_source, liveboard, or answer) along with the corresponding context objects, such as data_source_context, liveboard_context, or answer_context, as shown in the example requests below. |
| conversation_settings | Optional. Defines additional parameters for the conversation context. You can set attributes such as enable_contextual_change_analysis, enable_natural_language_answer_generation, and enable_reasoning as needed. |
Example request
The following example shows the request payload for the data_source context type:
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_context": {
"type": "data_source",
"data_source_context": {
"guid": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}
},
"conversation_settings": {
"enable_contextual_change_analysis": false,
"enable_natural_language_answer_generation": true,
"enable_reasoning": false
}
}'
The following example shows the request payload for the liveboard context type:
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_context": {
"type": "liveboard",
"answer_context": {
"session_identifier": "c3a00fa7-fd01-4d58-8c84-0704df986d9d",
"generation_number": 2
},
"liveboard_context": {
"liveboard_identifier": "cffdc614-0214-42ba-9f57-cb6e8312fe5a",
"visualization_identifier": "da0ed3da-ce1f-4071-8876-74d551b05faf"
},
"data_source_context": {
"guid": "54beb173-d755-42e0-8f73-4d4ec768114f"
}
},
"conversation_settings": {
"enable_contextual_change_analysis": false,
"enable_natural_language_answer_generation": true,
"enable_reasoning": false
}
}'
The following example shows the request payload for the answer context type:
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/conversation/create' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_context": {
"type": "answer",
"answer_context": {
"session_identifier": "f131ca07-47e9-4f56-9e21-454120912ae1",
"generation_number": 1
},
"data_source_context": {
"guid": "cd252e5c-b552-49a8-821d-3eadaa049cca"
}
},
"conversation_settings": {
"enable_contextual_change_analysis": false,
"enable_natural_language_answer_generation": true,
"enable_reasoning": false
}
}'
API response
If the API request is successful, the API returns the conversation ID. You can use this ID to send follow-up questions to the conversation session.
{"conversation_id":"q9tZYf_6WnFC"}
Note the conversation ID for subsequent agentic interactions and API calls.
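The following Python sketch wraps this call in a small helper so the same code can create conversations for the data_source, liveboard, or answer context types shown above. The helper name and placeholder values are illustrative, not part of the API.

```python
import requests

HOST = "https://{ThoughtSpot-Host}"   # replace with your instance URL
TOKEN = "{AUTH_TOKEN}"

def create_agent_conversation(metadata_context, settings=None):
    """Create a Spotter agent conversation and return its conversation ID."""
    payload = {"metadata_context": metadata_context}
    if settings:
        payload["conversation_settings"] = settings
    resp = requests.post(
        f"{HOST}/api/rest/2.0/ai/agent/conversation/create",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json",
                 "Accept": "application/json"},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()["conversation_id"]

# Conversation scoped to a data Model (data_source context).
conversation_id = create_agent_conversation(
    {"type": "data_source",
     "data_source_context": {"guid": "cd252e5c-b552-49a8-821d-3eadaa049cca"}},
    settings={"enable_contextual_change_analysis": False,
              "enable_natural_language_answer_generation": True,
              "enable_reasoning": False},
)
```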
Send a question and generate streaming responses
To send queries to an ongoing conversation session with Spotter agent and receive streaming responses, use the /api/rest/2.0/ai/agent/converse/sse API endpoint. This API endpoint uses the SSE protocol to deliver data incrementally as it becomes available, rather than waiting for the entire response to be generated before sending it to the client.
The /api/rest/2.0/ai/agent/converse/sse API can be used as an integrated tool for real-time streaming of conversational interactions between agents and the ThoughtSpot backend. It enables AI agents to send user queries and receive incremental, streamed responses that can be processed and sent to users. REST clients can also send a POST request with a conversation ID and query string to fetch streaming responses.
Request parameters
| Parameter | Description |
|---|---|
| conversation_identifier | String. Specify the conversation ID received from the POST /api/rest/2.0/ai/agent/conversation/create API call. |
| messages | Array of strings. Include at least one natural language query. For example, Net sales of Jackets. |
Example request
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/converse/sse' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"conversation_identifier": "h2I_pTGaRQof",
"messages": [
"Net sales of Jackets"
]
}'
API response
If the API request is successful, the response includes a stream of events, each containing a partial or complete message from the AI agent, rather than a single JSON object.
Each event is a text-based message in the format data: <your_data>\n\n: every message sent from the server to the client is prefixed with the data: keyword, followed by the actual payload (<your_data>), and terminated by two newline characters (\n\n).
The API uses this format so that clients can reconstruct the AI-generated response as it streams in, chunk by chunk, and show the responses in real time. In agentic workflows, the receiving client or agent listens to the SSE stream, parses each event, and assembles the full response for its users.
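As an illustration, a Python client (using the requests library) could read the stream line by line, accumulate text-chunk events, and keep the structured answer event for later use. The host, token, and conversation ID below are placeholders, and the handling only covers the message types described later in this section.

```python
import json
import requests

HOST = "https://{ThoughtSpot-Host}"   # replace with your instance URL
TOKEN = "{AUTH_TOKEN}"

resp = requests.post(
    f"{HOST}/api/rest/2.0/ai/agent/converse/sse",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json",
             "Accept": "application/json"},
    json={"conversation_identifier": "h2I_pTGaRQof",
          "messages": ["Net sales of Jackets"]},
    stream=True,  # keep the connection open and read events as they arrive
)
resp.raise_for_status()

text_parts = []
answer_event = None
for line in resp.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue  # skip blank separator lines between events
    for event in json.loads(line[len("data: "):]):
        if event["type"] == "text-chunk":
            text_parts.append(event["content"])   # accumulate the narrative text
        elif event["type"] == "answer":
            answer_event = event                   # structured Answer metadata
        elif event["type"] == "error":
            raise RuntimeError(event)

print("".join(text_parts))
if answer_event:
    meta = answer_event["metadata"]
    print(meta["session_id"], meta["generation_number"])
```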
Example response
data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "I"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " understand"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " you're"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " interested"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " in"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " net"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " of"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " Jackets"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "."}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " I'll"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " retrieve"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " relevant"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " data"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": " you"}]
data: [{"id": "OJ0zMh4PVa-y", "type": "text-chunk", "group_id": "czoDDhNwwU7z", "metadata": {"format": "markdown"}, "content": "."}]
data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "metadata": {"title": "Net sales of Jackets"}, "code": "nls_start"}]
data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "QH", "message": "Fetching Worksheet Data"}]
data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "TML_GEN", "message": "Translating your query with the Reasoning Engine"}]
data: [{"type": "notification", "group_id": "o8dQ9SAWdtrL", "code": "ANSWER_GEN", "message": "Verifying results with the Trust Layer"}]
data: [{"id": "r24X7D99SROD", "type": "answer", "group_id": "o8dQ9SAWdtrL", "metadata": {"sage_query": "[sales] [item type] = [item type].'jackets'", "session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89", "gen_no": 1, "transaction_id": "6874259d-13b1-478c-83cb-b3ed52628850", "generation_number": 1, "warning_details": null, "ambiguous_phrases": null, "query_intent": null, "assumptions": "You want to see the total sales amount for jackets item type.", "tml_phrases": ["[sales]", "[item type] = [item type].'jackets'"], "cached": false, "sub_queries": null, "title": "Net sales of Jackets", "worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"}, "title": "Net sales of Jackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "The"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " net"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " Jackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " have"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " been"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " visual"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "ized"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "."}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " This"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " analysis"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " specifically"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " filtered"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " item"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " type"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "jackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "\""}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " and"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " calculated"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " total"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " amount"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " associated"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " with"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " those"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " products"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "**"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "Summary"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " &"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " Insights"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ":"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "**\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " The"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " visualization"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " shows"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " total"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " net"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " all"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jacket"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " transactions"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " in"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " your"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " apparel"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " dataset"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " The"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " calculation"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " uses"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " only"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " amounts"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " where"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " item"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " type"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " is"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " \""}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "J"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "ackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\"\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "-"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " This"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " information"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " is"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " useful"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " for"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " understanding"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " the"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " revenue"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " contribution"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " of"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jackets"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " within"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " your"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " product"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " mix"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ".\n\n"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "If"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " you'd"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " like"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " to"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " see"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " a"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " breakdown"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " by"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " region"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " state"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " time"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " period"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " or"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " compare"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " jacket"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " sales"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " to"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " other"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " product"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " types"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": ","}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " please"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " let"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " me"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": " know"}]
data: [{"id": "BgY16KR8nVL1", "type": "text-chunk", "group_id": "_ARJXDKbFhHF", "metadata": {"format": "markdown"}, "content": "!"}]
The messages in the API response include the following parts:
- id: A unique identifier for the message group.
- type: Type of the message. Valid types are:
  - ack: Confirms receipt of the request. For example, the type in the first message data: [{"type": "ack", "node_id": "BRxCtJ-aGt8l"}] indicates that the server has received the client's request and is acknowledging it.
  - text / text-chunk: Content chunks, optionally formatted.
  - answer: The final structured response with metadata and analytics.
  - error: Indicates a failure.
  - notification: Notification messages.
- group_id: Groups related chunks together.
- metadata: Indicates the content format, for example, markdown.
- content: The actual text content sent incrementally. For example, "I", "understand", "you're", "interested", "in", "the", "net", "sales", and so on.
The following example shows the response text contents for the answer message type.
[
{
"id": "r24X7D99SROD",
"type": "answer",
"group_id": "o8dQ9SAWdtrL",
"metadata": {
"sage_query": "[sales] [item type] = [item type].'jackets'",
"session_id": "b321b404-cbf1-4905-9b0c-b93ad4eedf89",
"gen_no": 1,
"transaction_id": "6874259d-13b1-478c-83cb-b3ed52628850",
"generation_number": 1,
"warning_details": null,
"ambiguous_phrases": null,
"query_intent": null,
"assumptions": "You want to see the total sales amount for jackets item type.",
"tml_phrases": [
"[sales]",
"[item type] = [item type].'jackets'"
],
"cached": false,
"sub_queries": null,
"title": "Net sales of Jackets",
"worksheet_id": "cd252e5c-b552-49a8-821d-3eadaa049cca"
},
"title": "Net sales of Jackets"
}
]
The session ID and generation number serve as the data context for the Answer. You can use this information to create a new conversation session using /api/rest/2.0/ai/agent/conversation/create, or download the answer via the /api/rest/2.0/report/answer API endpoint.
Send queries to a conversation session with Spotter agent
To send queries to an ongoing conversation session with the Spotter agent, use the /api/rest/2.0/ai/agent/{conversation_identifier}/converse API endpoint.
To use this API, the user must have access to the relevant conversation session and include its ID in the API request URL. The API request body must include at least one message in natural language format.
Request parameters
| Parameter | Type | Description |
|---|---|---|
| conversation_identifier | Path parameter | String. Required. Specify the conversation ID received from the POST /api/rest/2.0/ai/agent/conversation/create API call. |
| messages | Form parameter | Array of strings. Required. Specify at least one natural language query string. For example, total sales of jackets. |
Example request
The following example shows the request body with the query text and the conversation ID.
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/agent/-1XZmqqMcbtm/converse' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"messages": [
"total sales of jackets"
]
}'
API response
If the request is successful, the API returns an array of objects in the response. The messages in the API response include the following parts:
- type: Type of the message, such as text, answer, or error.
- message: Response message generated for the query.
- metadata: Additional information based on the message type.
The following example shows the response text contents for the answer message type.
{
"messages":[
{
"metadata":{
},
"internal":{
},
"type":"text",
"text":"Let me retrieve the total sales of jackets from the dataset.",
"agent_context":""
},
{
"metadata":{
"output":"{metadata-output}",
"worksheet_id":"cd252e5c-b552-49a8-821d-3eadaa049cca",
"assumptions":"You want to know the total sales amount for jackets specifically.",
"chart_type":"KPI",
"data_awareness_enabled":true
},
"internal":{
},
"type":"answer",
"title":"total sales of jackets",
"description":"",
"session_id":"19a5da20-28c5-4266-a3be-e1c61f122b5d",
"gen_no":1,
"sage_query":"sum [sales] [item type] = [item type].'jackets'",
"tml_tokens":[
"sum [sales]",
"[item type] = [item type].'jackets'"
],
"formulas":[
],
"subqueries":[
],
"viz_suggestion":"CAAQIBomEiQzNGRmZjA2ZS0yNTViLTQ3NjUtYmJmYi00M2EwOGEzYmI4MjkoATIA",
"ac_state":null
},
{
"metadata":{
},
"internal":{
},
"type":"text",
"text":"The total sales of jackets has already been visualized for you.\n\n**Summary & Insights:**\n- The result represents the overall sales amount specifically for the item type \"jackets\".\n- This metric is useful for understanding the revenue contribution of jackets within your apparel product portfolio.\n- You can use this figure to benchmark jacket performance against other item types, evaluate promotional effectiveness, or inform inventory decisions.\n\nIf youโd like a breakdown by region, store, or time period, just let me know!",
"agent_context":""
}
],
"__args":{
"conversation_identifier":"IrnWPL1NFc6H",
"messages":[
"total sales of jackets"
]
}
}
The session ID and generation number serve as the data context for the Answer. You can use this information to create a new conversation session using /api/rest/2.0/ai/agent/conversation/create, or download the answer via the /api/rest/2.0/report/answer API endpoint.
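A brief Python sketch of this call, picking the structured answer message out of the returned array; the host, token, and conversation ID are placeholders.

```python
import requests

HOST = "https://{ThoughtSpot-Host}"   # replace with your instance URL
TOKEN = "{AUTH_TOKEN}"
CONVERSATION_ID = "-1XZmqqMcbtm"      # from /api/rest/2.0/ai/agent/conversation/create

resp = requests.post(
    f"{HOST}/api/rest/2.0/ai/agent/{CONVERSATION_ID}/converse",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    json={"messages": ["total sales of jackets"]},
)
resp.raise_for_status()

# Pick the structured Answer out of the returned messages array.
answer = next(m for m in resp.json()["messages"] if m["type"] == "answer")
print(answer["title"], answer["session_id"], answer["gen_no"])
```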
Process results generated from Spotter APIs
To export or download the Answer data generated by the Spotter APIs, use the Answer report API.
Note
Using tokens generated by the Spotter API in a Search Data API request can return invalid column errors, because these tokens may reference formulas or columns not present in the data model. Instead, use the Answer report API and include the session ID and generation number obtained from the Spotter API in your API request to retrieve the data.
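As a rough illustration, a Python request to the Answer report API might look like the following. The field names session_identifier, generation_number, and file_format are assumptions based on the note above; verify the exact request schema in the REST API v2.0 Playground before using this in your environment.

```python
import requests

HOST = "https://{ThoughtSpot-Host}"   # replace with your instance URL
TOKEN = "{AUTH_TOKEN}"

# Field names below are assumptions; check the Answer report API reference
# in the REST API v2.0 Playground for the exact schema.
resp = requests.post(
    f"{HOST}/api/rest/2.0/report/answer",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    json={"session_identifier": "57784fa1-10fa-431d-8d82-a1657d627bbe",
          "generation_number": 2,
          "file_format": "CSV"},
)
resp.raise_for_status()
with open("answer.csv", "wb") as f:
    f.write(resp.content)   # the report is returned as a file download
```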
Data literacy and query assistance
The query assistance APIs help users explore and analyze data effectively.
Get relevant questions
The /api/rest/2.0/ai/relevant-questions/ API endpoint breaks down a user-submitted query into relevant sub-questions. It accepts the original query and optional additional context, then generates a set of related questions to help users explore their data comprehensively.
During agentic interactions, this API can be used as an integrated tool to decompose user queries and suggest relevant questions for a specific data context. REST clients can also call this API directly to fetch relevant questions via a POST request.
Request parameters
| Parameter | Description |
|---|---|
| metadata_context | Required. Sets the metadata context for the query, for example, data_source_identifiers with an array of data source object GUIDs. |
| query | String. Required. Specify the query string that needs to be decomposed into smaller, analytical sub-questions. |
| limit_relevant_questions | Integer. Sets a limit on the number of sub-questions to return in the response. Default is 5. |
|  | Boolean. When set to |
|  | Additional context to guide the response. |
Example request
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/relevant-questions/' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"metadata_context": {
"data_source_identifiers": [
"cd252e5c-b552-49a8-821d-3eadaa049cca"
]
},
"query": "Net sales of Jackets in west coast",
"limit_relevant_questions": 3
}'
Example response
If the request is successful, the API returns a set of questions related to the query and metadata context in the relevant_questions array. Each object in the relevant_questions array contains the following fields:
- query: A string containing the natural language (NL) sub-question.
- data_source_identifier: GUID of the data source object.
- data_source_name: Name of the associated data source object.
{
"relevant_questions": [
{
"query": "What is the trend of sales by type over time?",
"data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"data_source_name": "(Sample) Retail - Apparel"
},
{
"query": "Sales by item",
"data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"data_source_name": "(Sample) Retail - Apparel"
},
{
"query": "Sales across regions",
"data_source_identifier": "cd252e5c-b552-49a8-821d-3eadaa049cca",
"data_source_name": "(Sample) Retail - Apparel"
}
]
}
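One way to use this endpoint in a guided-analysis flow is to feed each suggested sub-question back into the single Answer API. The following Python sketch chains the two documented endpoints; the host, token, and Model GUID are placeholders.

```python
import requests

HOST = "https://{ThoughtSpot-Host}"   # replace with your instance URL
TOKEN = "{AUTH_TOKEN}"
MODEL_ID = "cd252e5c-b552-49a8-821d-3eadaa049cca"
HEADERS = {"Authorization": f"Bearer {TOKEN}",
           "Content-Type": "application/json",
           "Accept": "application/json"}

# 1. Decompose the user's question into related sub-questions.
questions = requests.post(
    f"{HOST}/api/rest/2.0/ai/relevant-questions/",
    headers=HEADERS,
    json={"metadata_context": {"data_source_identifiers": [MODEL_ID]},
          "query": "Net sales of Jackets in west coast",
          "limit_relevant_questions": 3},
).json()["relevant_questions"]

# 2. Generate an Answer for each suggested sub-question.
for q in questions:
    answer = requests.post(
        f"{HOST}/api/rest/2.0/ai/answer/create",
        headers=HEADERS,
        json={"query": q["query"],
              "metadata_identifier": q["data_source_identifier"]},
    ).json()[0]
    print(q["query"], "->", answer["session_identifier"], answer["generation_number"])
```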
NL instructions to coach Spotter
Administrators and data owners can guide and refine how Spotter interprets and answers user questions. The natural language (NL) instructions API allows setting instructions at the data model level. The API provides business context and preferred interpretations for specific queries or terminology to coach Spotter, but it does not train or change the underlying LLM.
Set NL instructions
To coach and instruct the Spotter system on how to interpret queries, apply filters, select columns, handle data nuances, and present answers using the data from a specific model, you can set global rules in natural language format. Setting instructions helps Spotter generate precise and consistent responses for user queries.
To set instructions for a Model, send a POST request to the /api/rest/2.0/ai/instructions/set API endpoint.
Note
To set NL instructions, you'll need administration or data management privileges, or at least edit access to the data Model.
Request parameters
| Form parameter | Description |
|---|---|
| data_source_identifier | String. ID of the Model. |
| nl_instructions_info | Array. Instructions in natural language format. Each object includes an instructions array with one or more instruction strings, and a scope attribute, for example, GLOBAL. |
Example request
The following example defines instructions to coach Spotter on how to interpret the query:
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/instructions/set' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"data_source_identifier": "71311827-31bb-48b2-8465-9a215adbc05d",
"nl_instructions_info": [
{
"instructions": [
"When I ask for last month, use โlast 30 daysโ as a filter.",
"Exclude orders where order_status = '\''CANCELLED-USER'\'' when calculating total revenue"
],
"scope": "GLOBAL"
}
]
}'
Example response
If the API request is successful, ThoughtSpot returns the {"success":true} response.
Retrieve NL instructions assigned to a Model
To view the NL instructions assigned for a Model, send a POST request to the /api/rest/2.0/ai/instructions/get API endpoint.
Only Spotter users with view access to the data model can retrieve instructions via API requests.
Request parameters
| Form parameter | Description |
|---|---|
| data_source_identifier | String. ID of the Model from which you want to fetch instructions. |
Example request
The following example shows the request body for retrieving NL instructions configured on a Model:
curl -X POST \
--url 'https://{ThoughtSpot-Host}/api/rest/2.0/ai/instructions/get' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer {AUTH_TOKEN}' \
--data-raw '{
"data_source_identifier": "3bbeac16-a723-4886-9eba-c4779d07fd83"
}'
Example response
If the instructions are configured on the Model specified in the API request, ThoughtSpot returns an array of instructions in the API response.
{
"nl_instructions_info": [
{
"instructions": [
"When I ask for last month, use โlast 30 daysโ as a filter.",
"Exclude orders where order_status = 'CANCELLED-USER' when calculating total revenue"
],
"scope": "GLOBAL"
}
]
}
Additional resources
- Visit the REST API v2.0 Playground to view the API endpoints and verify the request and response workflows.
- For information about MCP tools, see MCP server integration.