Agent Hub Overview
The Agent Hub is where you can connect various kinds of models and configure them to work with your project annotations. You can set up a model’s credentials based on the provider, and use those credentials to connect it to your team.
Models connected this way can be used in Multimodal forms either while annotating or through Explore.
Connect a model
To connect a model:
- From the left panel, go to the Agent Hub.
- In the Models tab, click + Connect.
- Type the name of your model connection.
- Type the description of your model connection (optional).
- Select a provider.
- Select the Model from the provider's list of available models. Depending on the provider, you might have to enter the model's name or ID manually.
- Select the model’s Credentials.
- Under Instructions, type the system prompt that defines the model's behavior, personality, tone, and approach to tasks.
- Define the following information depending on what’s available for the selected provider:
- Select the model’s Reasoning effort: Minimal, Low, Medium, or High. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. Increasing reasoning effort can result in slower, but higher quality responses, using more tokens.
- Select the model’s Verbosity: Low, Medium, or High. Lower levels yield shorter answers.
- Select the model’s Temperature to control its randomness. Lowering the temperature results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.
- Enter the Max tokens to limit the total number of tokens (roughly, words and punctuation marks) used in the response.
- Enter the Timeout in seconds to define how long the request is pending before it gets canceled. This ensures that it doesn’t wait for the model’s response indefinitely.
- Enter the Max retries to define how many times the system will retry a request if it fails due to timeouts, rate limits, connection limits, or other temporary issues.
- Choose the model's response format, if this option is available for the selected provider:
- Text: The response will be delivered in a text format. (Default)
- Structured Output: The response will be delivered based on a JSON schema. You can provide the schema yourself to define the structure of the model's response. The schema must always include the mandatory keys: name, description, type (which must always be "object"), and properties. When you've made your changes, click Save to apply them.
- Add a function under Tools, if this option is available for the selected provider. The model can then make use of these tools based on the input received from the user. You may have up to 20 functions in each connection at a time.
- Under Custom configuration, click the Edit icon to specify your model's parameters as JSON (Fireworks AI, Vertex AI, Databricks, and AWS Bedrock only).
- When you’re done, click Connect.
Custom configuration
When connecting certain models, you can flexibly configure them by providing specific parameters in a JSON schema. This can be found under the Custom configuration section.
Provide the connection details for your model as a JSON, following the OpenAI Chat Completions API request body format. If left blank, the model will be connected using its default parameters.
You cannot add the model and messages keys.
JSON schema example
{
"temperature": 0.7,
"top_p": 0.9,
"max_tokens": 512,
"presence_penalty": 0.2,
"frequency_penalty": 0.1,
"response_format": {
"type": "json_schema",
"json_schema": {
"name": "contact_schema",
"schema": {
"type": "object",
"properties": {
"full_name": {
"type": "string"
},
"email": {
"type": "string",
"format": "email"
},
"phone": {
"type": "string"
}
},
"required": [
"full_name"
]
}
}
}
}
Agent ID
To see and copy your Agent’s ID:
- In the Agent Hub, find your connection.
- Click the three dots ⋮.
- You can see the Agent ID at the top of the list. Click the Copy icon to copy it.
Validate connection
To validate your model connection:
- In the Agent Hub, find your connection.
- Click the three dots ⋮.
- Select Check connection.
Validating your connections
If there are any issues with the connection's credentials (for example, your tokens are exhausted), validation will fail.
Response Format
When you call your AI model from a Pipeline via SuperAnnotate, we forward your request to the configured model and return its response in a consistent JSON format that's easy to handle in your applications.
Each response contains the following fields:
Field | Type | Description
---|---|---
content | string / object / null | The main response content. Can be plain text, structured JSON, or null if a tool call is required.
response_type | string | Indicates the response format: "text" for plain text, "json_schema" for structured JSON and tool calls.
timestamp | string (ISO) | The exact time the response was generated (UTC).
model_used | string | The underlying AI model that produced the response.
agent_used | string | The name of the agent used (e.g., "My_Chatgpt_Agent").
tool_calls | array | Optional. Lists the function calls requested by the model, if any.
tokens_used | number | Number of tokens consumed in generating the response.
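If you consume these responses in your own application code, you can branch on response_type and tool_calls. The function below is a minimal sketch in Python; it only mirrors the fields in the table above and assumes you have already retrieved the response as a JSON string (how you call the agent is outside the scope of this sketch).

import json

def handle_agent_response(raw: str) -> dict:
    # Minimal sketch: route a response by the fields described in the table above.
    response = json.loads(raw)

    # content is null when the model requests a tool call instead of answering.
    if response.get("tool_calls"):
        return {"kind": "tool_calls", "calls": response["tool_calls"]}

    # Structured Output arrives as a JSON object under "content".
    if response.get("response_type") == "json_schema":
        return {"kind": "structured", "data": response["content"]}

    # Default: plain text content.
    return {"kind": "text", "text": response["content"]}

The same branching covers the Text, JSON schema, and Tools examples shown below.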
Text
Example of a response where the response format is Text:
{
"content": "SuperAnnotate is consistently ranked the #1 Data Labeling platform on G2, driven by passionate customers and concrete results.",
"response_type": "text",
"timestamp": "2025-08-26T06:15:53.372046Z",
"model_used": "gpt-5",
"agent_used": "Chatgpt",
"tool_calls": [],
"tokens_used": 99
}
JSON schema
Use this checklist to make sure your JSON passes standard validation rules:
- Required fields present: name, description, type, and properties are all required.
- Strings: Both name and description must be strings (e.g., "project_name").
- Type key: The type value must be "object" at the root level.
- Properties: Must be an object containing at least one field.
- Each field in properties must be either an object with type defined, or a Boolean (true/false).
- Unique keys: No duplicate property names allowed.
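If you want to pre-check a schema against these rules before saving it, the function below is a minimal sketch in Python. It only mirrors the checklist above and is not the platform's own validator; note that duplicate keys are already collapsed by most JSON parsers, so that rule isn't checked here.

def precheck_schema(schema: dict) -> list[str]:
    # Returns a list of problems found; an empty list means the checks passed.
    problems = []
    for key in ("name", "description", "type", "properties"):
        if key not in schema:
            problems.append(f"missing required key: {key}")
    if not isinstance(schema.get("name"), str) or not isinstance(schema.get("description"), str):
        problems.append("name and description must be strings")
    if schema.get("type") != "object":
        problems.append('type must be "object" at the root level')
    properties = schema.get("properties")
    if not isinstance(properties, dict) or not properties:
        problems.append("properties must be an object with at least one field")
    else:
        for field, definition in properties.items():
            if isinstance(definition, bool):
                continue  # Booleans are allowed as field definitions
            if not (isinstance(definition, dict) and "type" in definition):
                problems.append(f'field "{field}" must be an object with a type, or a Boolean')
    return problems

For example, the following schema passes all of these checks: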
{
"name": "Book Metadata",
"description": "Provide the book's title to receive its metadata",
"type": "object",
"properties": {
"title": {
"type": "string"
},
"authors": {
"type": "array",
"items": {
"type": "string"
}
},
"abstract": {
"type": "string"
},
"keywords": {
"type": "array",
"items": {
"type": "string"
}
}
},
"required": [
"title",
"authors",
"abstract",
"keywords"
]
}
Example response of a structured output:
{
"content": {
"title": "The Lean Startup",
"authors": [
"Eric Ries"
],
"abstract": "The Lean Startup is a methodology for developing businesses and products with maximum capital efficiency and validated learning. It advocates iterating rapidly through Build-Measure-Learn cycles, focusing on minimum viable products (MVPs), actionable metrics, and continuous deployment to reduce waste and increase the odds of product-market fit.",
"keywords": [
"Lean Startup",
"MVP",
"Build-Measure-Learn",
"validated learning",
"pivot",
"innovation accounting",
"continuous deployment",
"startup methodology"
]
},
"response_type": "json_schema",
"timestamp": "2025-08-26T06:11:16.084436Z",
"model_used": "gpt-5",
"agent_used": "My_Chatgpt_Agent",
"tool_calls": [],
"tokens_used": 331
}
Tools
This JSON object corresponds to the tools argument in the OpenAI API. Currently, functions are the only supported tool type.
The tool choice is always set to "auto", which lets the model decide when to call a function.
Each tool definition must include the following mandatory keys:
- name – A unique name for the tool.
- description – A short explanation of the tool's purpose.
We recommend also including the parameters key to define the input schema. Providing this key ensures the function can return a non-empty object when called.
An example of a function:
{
"name": "get_weather",
"description": "Determine weather in my location",
"strict": true,
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": [
"c",
"f"
]
}
},
"additionalProperties": false,
"required": [
"location",
"unit"
]
}
}
Example response when a tool call is made:
{
"content": null,
"response_type": "json_schema",
"timestamp": "2025-08-26T05:47:51.965671Z",
"model_used": "gpt-5",
"agent_used": "My_Chatgpt_Agent",
"tool_calls": [
{
"name": "get_weather",
"content": {
"unit": "f",
"location": "San Francisco, CA"
}
}
],
"tokens_used": 239
}
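The platform only reports which function the model wants to call (the tool choice is auto); executing the function is up to your application. The snippet below is a minimal dispatch sketch in Python, using a hypothetical local get_weather implementation that matches the function declared above.

def get_weather(location: str, unit: str) -> str:
    # Hypothetical local implementation of the get_weather function declared above.
    return f"It is sunny in {location} (unit: {unit})"

# Map tool names from the response to local handlers.
HANDLERS = {"get_weather": get_weather}

def run_tool_calls(response: dict) -> list:
    # Execute each tool call listed in the agent response and collect the results.
    results = []
    for call in response.get("tool_calls", []):
        handler = HANDLERS.get(call["name"])
        if handler is None:
            continue  # unknown tool; skip or log as needed
        results.append(handler(**call["content"]))
    return results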
Edit connection
To edit your model connection:
- In the Agent Hub, find your connection.
- Click the three dots ⋮.
- Select Edit.
- Make your changes accordingly. You can edit everything except for the Provider.
- Click Save.
Editing your connections
- If there are any issues with the connection's credentials (for example, your tokens are exhausted), your edits will not take effect.
- After you edit a connection, the model and its parameters are automatically updated across the platform wherever that connection is used.
Duplicate connection
To duplicate your model connection:
- In the Agent Hub, find your connection.
- Click the three dots ⋮.
- Select Duplicate.
- Type the name of the connection duplicate.
- Click Duplicate.
Duplicating your connections
You cannot duplicate a connection if you've already reached the maximum of 200 connections per team. If there are any issues with the connection's credentials (for example, your tokens are exhausted), duplication will not take effect.
Delete connection
To delete your model connection:
- In the Agent Hub, find your connection.
- Click the three dots ⋮.
- Select Delete.
- In the pop-up, click Delete.
Keep in mind
Any projects using this connection will be affected and may become invalid.