Run Agents function
Setting up the function
The runAgents() function triggers one or more agents, each defined by a configuration object specifying the model, prompt, parameters, and expected response format. The function sends requests directly to the foundational model endpoint and returns responses exactly as received, without transformation.
Abort controllers are supported to cancel any in-flight agent execution.
Inside the Code editor
```python
from abort import create_abort_controller

# Controller whose signal can be used to cancel the in-flight request
controller = create_abort_controller()

# `sa` is the SDK client available in the Code editor
await sa.runAgents({
    "message": "what is the weather?",
    "model": 408,
    "model_params": {},
    "response_type": "text"
}, controller.signal)

controller.abort()  # cancel
```
Inside the Web component
```javascript
const controller = new AbortController();

SA_SDK.runAgents({
  message: "what is the weather?",
  model: 408,
  model_params: {},
  response_type: "text"
}, controller.signal);

controller.abort(); // cancel
```
Request Object Structure
Each agent configuration object may contain the following keys.
Core Fields
| Key | Type | Description |
|---|---|---|
| message | str | User input message. |
| model | int | Model ID to execute. |
| system_instructions | str | System-level instructions prepended to the request. |
| model_params | dict | Overrides for the model's configuration parameters. |
| prompt | str | Alternate prompt field for AI execution. |
| response_type | ResponseType | Expected response type (text, json, image, etc.). |
| json_schema | dict | Schema used when expecting a structured JSON response. |
| stream | bool | Whether the response should be streamed. |
| files | List[str] | URLs or base64-encoded files attached to the request. |
| drop_params | bool | If true, request parameters not defined in the model config are removed before sending. |
All fields are optional except model, and every request must include at least one of message or prompt.
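As a quick illustration of these fields, the sketch below assembles a single agent request that asks for a structured JSON response. The model ID, schema, and file URL are placeholder values, and the abort signal argument is omitted for brevity; this is an assumed usage pattern, not a documented default.

```python
# Hypothetical agent configuration built from the core fields above.
# The model ID, schema, and file URL are illustrative placeholders.
request = {
    "model": 408,
    "message": "Summarize the attached report.",
    "system_instructions": "Respond only with valid JSON.",
    "response_type": "json",
    "json_schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "summary": {"type": "string"}
        },
        "required": ["title", "summary"]
    },
    "files": ["https://example.com/report.pdf"],
    "drop_params": True,
    "model_params": {}
}

# Response is returned exactly as produced by the foundational endpoint.
response = await sa.runAgents(request)
```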
model_params Structure
The model_params key allows overriding the default configuration of the selected model.
Supported keys
| Key | Type | Description |
|---|---|---|
| timeout | int | Request timeout in milliseconds. |
| verbosity | `"low"` \| `"medium"` \| `"high"` | Controls response verbosity. |
| max_retries | int | Number of retry attempts for failed requests. |
| reasoning_effort | `"none"` \| `"medium"` \| `"high"` | Level of reasoning effort used by the model. |
| temperature | float | Sampling temperature (0 to 1+). |
| max_tokens | int | Maximum tokens the model may generate. |
"model_params": {
"timeout": 700,
"verbosity": "medium",
"max_retries": 5,
"reasoning_effort": "none",
"temperature": 0.7,
"max tokens": 3500
}Behavior
- Accepts a single request object or an array of multiple agents, each executed independently.
- Returns the underlying foundational-model response as-is, with no formatting, merging, or post-processing.
- Aborting via AbortController (Web) or create_abort_controller (Python) immediately cancels all pending executions tied to the signal.
- When stream=true, the caller receives streaming chunks exactly in the form provided by the foundational endpoint.
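To show the array form described above, the sketch below submits two agent configurations in a single call and prints whatever comes back. The second model ID, the prompts, and the exact shape of the returned results are assumptions for illustration only.

```python
from abort import create_abort_controller

# Two independent agent configurations submitted in one runAgents() call.
# Model IDs, prompts, and the shape of the returned results are illustrative.
agents = [
    {
        "model": 408,
        "message": "what is the weather?",
        "response_type": "text",
    },
    {
        "model": 412,
        "prompt": "Translate 'good morning' into French.",
        "response_type": "text",
        "stream": True,
    },
]

controller = create_abort_controller()
results = await sa.runAgents(agents, controller.signal)

# Responses are returned exactly as the foundational endpoint produced them,
# so the streamed entry arrives as raw chunks with no post-processing.
print(results)

# controller.abort() would cancel both executions while they are in flight.
```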