Configure an AI model and agent
Before you can incorporate AI into your application, you must select one or more AI models. If you want to use AI to perform activities or provide a chatbot, you must also configure an AI agent.
Choose an AI model
You have a choice of Large Language Models (LLMs) that you can use to power the AI features in your application. Several popular LLMs are available out of the box in your Grexx Studio, including models from Anthropic, DeepSeek, Google, and OpenAI. You can also use an LLM managed by Grexx.
You can add multiple models to your Studio and use different models for different use cases. For example, you might use one model to power the AI agent used in a chatbot, and a different model to summarize the contents of form fields.
When you use AI in your application, data from your application is shared with the provider of the LLM. This includes any information provided by the user (such as when interacting with a chatbot or asking an AI agent to translate data in a form). For more information about how that data is stored and used, refer to the LLM provider's terms of service.
To add a model to your Studio so that it is available to use with AI features:
- From your Studio, navigate to Agents > Models.
- Click Add model.
- Enter a name for the model. You can use this to identify the purpose of the model (such as "user assistance" or "advanced reasoning").
- From the Model picklist select the LLM you want to use. You can choose LLMs from various public providers. Alternatively, select grexx-ai to use an LLM managed by Grexx. When you use grexx-ai, data from your application is not shared with third parties and is not used to train models.
- Optionally use the Model Parameters to adjust the behavior of the model. If a user asks the AI agent to adjust the generated output (for example, to increase the length of the response), the model will work within the parameters that you have set here.
The available parameters vary according to the model selected. For more information about a particular parameter, refer to the model provider's API documentation.
- Temperature: Controls the randomness of the generated response. Lower values result in a more deterministic response, which can be useful when generating a summary or providing instructions. Higher values result in a more random output, which can be useful when generating ideas.
- Frequency penalty: Controls how the model handles repeated words and phrases. Lower values mean repetition is not penalized (and therefore the same words and phrases may appear multiple times). To reduce repetition in the generated response, increase the value.
- Presence penalty: Controls the extent to which the model reuses words and phrases, regardless of how frequently they appear. Lower values encourage reuse, while higher values result in a more varied vocabulary.
- Max tokens: Controls the length of the generated response. A token is the smallest unit of text processed by the model, which may be a word or part of a word. Smaller values result in shorter output, while larger values result in longer responses.
- Top P: Controls the diversity of the generated response. Lower values reduce the number of tokens (words) that the model can choose from as it generates a response, while higher values increase the range of options.
- Click Submit. The model is added to the list and can be selected when creating an AI agent (as described below) or when using AI in form logic.
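Temperature and Top P are easiest to understand as operations on the model's next-token probabilities. The following standalone Python sketch is illustrative only (it is not Grexx or provider code); it shows how temperature scaling and nucleus (Top P) filtering reshape a toy distribution over candidate tokens:

```python
import math

def sample_distribution(logits, temperature=1.0, top_p=1.0):
    """Turn raw logits into next-token probabilities, applying
    temperature scaling and nucleus (Top P) filtering."""
    # Temperature: lower values sharpen the distribution (more
    # deterministic output); higher values flatten it (more random).
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    peak = max(scaled.values())
    exps = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: v / total for tok, v in exps.items()}

    # Top P: keep only the smallest set of tokens whose cumulative
    # probability reaches top_p, then renormalize what is left.
    kept, cumulative = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# Toy logits for four candidate next tokens.
logits = {"the": 4.0, "a": 3.0, "cat": 1.0, "dog": 0.5}

# Low temperature concentrates almost all probability on the top token.
deterministic = sample_distribution(logits, temperature=0.2)

# Top P = 0.7 keeps only the most likely tokens and drops the tail.
narrowed = sample_distribution(logits, top_p=0.7)
```

Frequency and presence penalties work similarly but adjust logits based on the conversation so far, discouraging tokens that have already appeared.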
You can edit existing models, including the name and the selected LLM. This is useful if you want to change the underlying LLM (for example to move to a cheaper or more powerful model) without having to update all the places in which you use that model.
In future, you will also be able to configure your own LLMs for use with your application. This is useful if you want to use your own customer-specific key from an external provider or a custom model hosted either by a provider or on your own infrastructure.
Configure an AI agent
If you want to use AI to discuss a specific case, perform activities or power a chatbot, you need to create an AI agent. When you create an AI agent, you specify the AI model (LLM) you want to use, the prompt that it should follow, and the degree of access it has to your application, among other things. Once you have created an agent, you can grant it permission to perform activities and view data.
You can create different AI agents to suit different use cases. For example, for a chatbot on your homepage you might create an agent with access only to the current user's case and their orders, while for autonomous data analysis and tasks you might create an agent that uses a more powerful LLM and has wider access to your application data.
To create an AI agent:
- From your Studio, navigate to Agents > Agents and click Create Agent.
- Enter a Name to identify the agent in the Studio.
- From the Model picklist, select the LLM you want to use.
- Use the Prompt and Context fields to provide instructions for the agent:
  - Use the Prompt field to provide the agent with general instructions on how it should behave, its purpose, and its name. You can also include template functions. For example, you might use user.Current user to reference the user's name so you can tell the agent how to address the user.
  - Use the Context field to provide the agent with information so that it can respond to requests more accurately. You can include template functions to incorporate data from your application. For example, if you want to use the chatbot to answer customer queries, you might direct the agent to the current user's purchase history.
  - The contents of both fields are combined into an initial prompt for the AI model. If the contents of the Context field change during an interaction (for example, because the values returned by template functions change), the updated context is sent to the AI model as a further prompt. For more information about prompts, refer to the LLM provider's documentation.
- Optionally configure the Context switching and Web search options, as described below. Alternatively, you can edit these settings later. When you are ready, click Create. The agent is added to the list.
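The way the Prompt and Context fields feed the model can be sketched as follows. This is a minimal illustration with hypothetical helper names, where the ${...} placeholders stand in for Grexx template functions:

```python
import string

# Hypothetical field contents; ${...} placeholders stand in for
# template functions such as user.Current user.
PROMPT = "You are a support assistant. Address the user as ${user_name}."
CONTEXT = "The user's most recent order is ${last_order}."

def resolve(template, case_data):
    """Resolve placeholders against current case data (illustrative only)."""
    return string.Template(template).safe_substitute(case_data)

def initial_prompt(case_data):
    """Both fields are combined into one initial prompt for the model."""
    return resolve(PROMPT, case_data) + "\n" + resolve(CONTEXT, case_data)

def context_update(previous_context, case_data):
    """If the resolved Context changes mid-conversation (because a
    template function now returns a new value), it is sent to the
    model again as a further prompt; otherwise nothing is sent."""
    current = resolve(CONTEXT, case_data)
    return current if current != previous_context else None

data = {"user_name": "Ada", "last_order": "ORD-100"}
first = initial_prompt(data)

# The order changes during the conversation, so the context is re-sent.
data["last_order"] = "ORD-101"
update = context_update(resolve(CONTEXT, {"last_order": "ORD-100"}), data)
```

The key point is that the prompt is sent once, while the context is re-evaluated and only re-sent when its resolved value changes.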
Context switching
An AI agent always operates in the context of a specific case. By default, an AI agent is restricted to the context of its initial case. The initial case is either the case from which the agent activity was performed, or a case specified as an input to the agent activity. This could be a particular support ticket case or order case, or a page, such as the homepage.
If an AI agent is restricted to the context of the initial case, the agent can only perform activities (for which it has been granted rights) on that case. Furthermore, any template functions in the Prompt or Context fields are resolved in the context of that case.
In some situations you may want an AI agent to be able to switch to the context of other cases in your application. When an agent switches to the context of another case, it can perform activities on that case. From the Settings tab for the AI agent, select the appropriate Context switching option:
- Not allowed: Prevents the agent from switching to the context of another case. This is the default behavior.
- To same attribute: Allows the agent to switch to any case with the same value in a particular platform attribute. For example, if you have created a platform attribute for Customer ID, you might want to allow the AI agent to switch to the context of other cases that have the same Customer ID value, such as any other Order cases for that customer.
- To any case: Allows the agent to switch to the context of any case identified in an attribute on the current case, a global case (specified in the agent activity), or a dataset to which the agent has been granted rights.
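The "To same attribute" rule amounts to an equality check on a chosen platform attribute. A minimal sketch, using hypothetical data structures rather than Grexx internals:

```python
def can_switch(initial_case, target_case, attribute):
    """The agent may switch to a target case only when both cases
    carry the same value for the chosen platform attribute."""
    value = initial_case.get(attribute)
    return value is not None and target_case.get(attribute) == value

# Hypothetical cases keyed by platform attributes.
initial = {"id": "TICKET-7", "Customer ID": "C-42"}
same_customer_order = {"id": "ORDER-3", "Customer ID": "C-42"}
other_customer_order = {"id": "ORDER-9", "Customer ID": "C-99"}

allowed = can_switch(initial, same_customer_order, "Customer ID")   # permitted
blocked = can_switch(initial, other_customer_order, "Customer ID")  # refused
```

Cases that lack the attribute entirely are also refused, since there is no shared value to match on.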
For more information about giving agents access to cases, see Use AI agents to perform activities.
Web search
To allow the AI agent to make calls to a search engine to inform its behavior, select the agent, open the Settings tab, and enable Allow web search. This is useful if you want to allow the agent to access real-time web results (not just the data already included in the model). You can use the Prompt field to give the agent further instructions on how it should perform web searches, including any restrictions on what can be included in a search query.
Note that:
- The exact behavior, including the number of calls made and the data included in the search query, depends on the AI model (LLM).
- Depending on the prompt and the user's request (where applicable), allowing web search can result in sensitive data being sent to the search engine.
Allow AI agents to redirect to other agents
In some cases, you may want to give an AI agent the option to redirect a request to another AI agent. For example, by default you may want to use an agent that uses a local LLM without the option to search the internet. However, if the agent is unable to answer a user's question, you may want to switch to another agent that uses a different LLM and/or that has a different prompt and permissions.
You can achieve this by adding a redirect to an agent:
- From your Studio, navigate to Agents and select an existing agent.
- Select the Tools tab and click Create agent connection.
- Enter a name to identify the connection.
- From the Agent to connect to picklist, select the AI agent that you want the current agent to redirect to as required.
- From the Tool type picklist, select Redirect to agent.
- To allow the first agent to instruct the second agent to return the conversation to the first agent, enable Allow redirect back.
- Click Submit. The redirect capability is added to the agent.
Note that when one AI agent redirects to another in order to perform an activity or respond to a user, the second agent is granted permissions to the same cases and activities as the first agent.
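The redirect behavior described above can be sketched as a simple handoff chain. All class and attribute names here are hypothetical illustrations, not the Grexx implementation:

```python
class Agent:
    """Toy model of an AI agent with an optional redirect connection."""
    def __init__(self, name, can_answer):
        self.name = name
        self.can_answer = can_answer
        self.redirect_to = None           # set via an agent connection
        self.allow_redirect_back = False  # "Allow redirect back" setting

    def handle(self, question):
        if self.can_answer(question):
            return f"{self.name} answers the question"
        if self.redirect_to is not None:
            # The second agent runs with the same case and activity
            # permissions as the first agent.
            return self.redirect_to.handle(question)
        return f"{self.name} cannot answer"

# A local agent handles order questions itself and redirects the rest
# to a more capable fallback agent.
local = Agent("local-llm", can_answer=lambda q: "order" in q)
fallback = Agent("web-capable", can_answer=lambda q: True)
local.redirect_to = fallback
local.allow_redirect_back = True
```

In this sketch, a question containing "order" is answered locally, while anything else is handed to the fallback agent.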
Add AI agents to roles
You can grant AI agents rights to perform activities and rights to access datasets. You grant an agent rights by adding it to one or more roles and then granting those roles the right to perform an activity or access a dataset. This is very similar to the way in which you grant rights to users.
You can choose to create dedicated agent roles or you can create roles that can be assigned to both users and agents. To create a role for an AI agent:
- Navigate to Roles > Role names.
- Create a new role name (or edit an existing role name) and enable it for Agents.
- Create a platform or casetype role using the agent role name.
- From the Agents picklist, select the AI agents that you want to add to the role.
For more information about creating role names, platform roles, and casetype roles, see Configure user roles.
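The rights model described above resolves an agent's effective rights as the union of the rights of its roles, just as for users. A minimal sketch with hypothetical example names, not Grexx internals:

```python
# Roles map to their member agents and to the rights they grant.
role_members = {
    "support-agents": {"chatbot-agent", "triage-agent"},
    "analysts": {"triage-agent"},
}
role_rights = {
    "support-agents": {"perform:close-ticket"},
    "analysts": {"view:orders-dataset"},
}

def agent_rights(agent):
    """Union of the rights granted by every role the agent belongs to."""
    rights = set()
    for role, members in role_members.items():
        if agent in members:
            rights |= role_rights.get(role, set())
    return rights
```

An agent in several roles (here, triage-agent) accumulates the rights of all of them; an agent in no role has no rights at all.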
Once you have created one or more agent roles, you can grant those roles the right to perform activities and view datasets. Note that:
- Agents can only perform activities on cases to which they have access. For more information, see Use AI agents to perform activities.
- When you grant an agent rights to view a dataset, the agent can view all columns for all cases (or tasks) in the dataset, regardless of whether it has access to those cases. For more information about adding columns to datasets, see Configure datasets.
If you are using the RESTful API, you can grant rights to AI agent roles to view forms and/or datasets in the same way as normal user roles.
Next steps
Once you have configured an AI agent, you can use an agent activity to instantiate the agent for a particular user and case. Alternatively, you can use the agent to power a chatbot (which will instantiate the agent in the background).