
Fabric data agents work with existing OneLake implementations, giving them a base set of data to use as context for your queries. Along with your data, they can be fine-tuned using examples or be given specific instructions to help build queries.
There are some prerequisites before you can build a data agent. The key requirement is an F64 or higher Fabric capacity, along with a suitable data source: a lakehouse, a data warehouse, a set of Power BI semantic models, or a KQL database. Limiting the sources makes sense, as it reduces the risk of losing the context associated with a query and keeps the AI grounded. It also ensures the agent works with a limited set of known query types, allowing it to turn your questions into the appropriate query.
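Conceptually, a data agent is a bundle of attached sources, steering instructions and few-shot example queries. The sketch below models that shape in plain Python; the names (`DataAgentConfig`, `add_source`, `add_example`) are illustrative assumptions, not the actual Fabric API, but the constraint it enforces, that only the four supported source types can be attached, mirrors the prerequisites described above.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these class and method names are hypothetical,
# not the real Fabric SDK. The four source kinds mirror what Fabric supports.
SUPPORTED_SOURCES = {"lakehouse", "warehouse", "semantic_model", "kql_database"}

@dataclass
class DataAgentConfig:
    name: str
    sources: list = field(default_factory=list)   # (kind, item_name) pairs
    instructions: str = ""                        # steering text for the agent
    examples: list = field(default_factory=list)  # (question, query) pairs

    def add_source(self, kind: str, item_name: str) -> None:
        # Only the supported source types may be attached to an agent.
        if kind not in SUPPORTED_SOURCES:
            raise ValueError(f"unsupported source type: {kind}")
        self.sources.append((kind, item_name))

    def add_example(self, question: str, query: str) -> None:
        # Few-shot pairs showing the agent how questions map to queries.
        self.examples.append((question, query))

agent = DataAgentConfig(name="sales-agent",
                        instructions="Prefer the curated gold tables.")
agent.add_source("lakehouse", "SalesLakehouse")
agent.add_example("Total revenue last month?",
                  "SELECT SUM(revenue) FROM gold.sales")
```

The point of restricting source kinds at configuration time is the grounding argument above: the agent only ever has to translate questions into a small, known family of query languages.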
Building AI-powered queries
The agent uses the user's credentials when making queries, so it only works with data that user is allowed to view. Role-based access controls are the default, keeping your data as secure as possible. Agents' operations need to avoid leaking confidential information, especially if they're to be embedded within more complex Azure AI Foundry agentic workflows and processes.
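The security model here is identity pass-through: because each query runs under the asking user's credentials, the access trim happens in the data tier before any rows reach the model. A minimal sketch of that idea, with hypothetical users, regions and a toy table (none of this is real Fabric code):

```python
# Hypothetical sketch of identity pass-through. The ACL, users and table
# below are invented for illustration; the point is that filtering happens
# at query time, so the agent never sees rows the user cannot view.
ROW_ACL = {
    "alice": {"emea"},          # regions each user may read
    "bob": {"emea", "amer"},
}

SALES = [
    {"region": "emea", "revenue": 100},
    {"region": "amer", "revenue": 250},
]

def query_as(user: str, table: list) -> list:
    # Security trim in the data tier: unknown users get an empty view.
    allowed = ROW_ACL.get(user, set())
    return [row for row in table if row["region"] in allowed]

print(query_as("alice", SALES))  # alice sees only EMEA rows
```

Doing the trim at the data tier rather than in the prompt is what makes the approach safe to embed in larger agentic workflows: a downstream agent can only leak what the original user was entitled to see.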