Microsoft Azure has been at the heart of Microsoft’s AI ambitions for many years now. It began with making the deep learning products of Microsoft Research available as Azure Cognitive Services. Then Microsoft added tools to roll your own cloud-hosted machine learning, using Azure to train models and host the resulting services. Now Azure is the home for Microsoft’s growing family of Copilots, which both build on Azure OpenAI’s generative AI models and give customers access to those same models.
Supporting all of these tools, plus providing a framework for customizing cloud service models, required Azure to provide more than one development environment. The result was, to say the least, complex and hard to understand. Fortunately, the Azure AI team has been working on a replacement, Azure AI Studio, that unifies Azure’s AI development tools, building on responsible AI concepts and supporting a mix of pre-defined and custom AI models.
The development of Azure AI Studio reflects a fundamental change in the way we use AI models. Instead of simply making an API call to a single model, we’re now building pipelines that mix different aspects of a model, or even chain different models together, to deliver multimodal applications. Tools like LangChain, Semantic Kernel, and Prompt Flow are now essential frameworks for taming and controlling the output of generative AI, grounding it in our own data.
For example, a computer vision application might identify the objects in a picture, feed that list to a generative AI large language model to produce a text description of the image, and then use a voice generator to read that description aloud to a visually impaired user holding a camera.
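Here’s a minimal sketch of that kind of chain in Python, assuming the openai and azure-cognitiveservices-speech packages. The detect_objects helper is a hypothetical stand-in for a call to a vision service such as Azure AI Vision, and the endpoints, keys, and deployment name are placeholders.

```python
# A rough sketch of the chain above: object detection feeds an LLM, which
# feeds a speech synthesizer. detect_objects() is a hypothetical stand-in
# for a computer vision call; endpoints, keys, and the deployment name are
# placeholders.
import os

import azure.cognitiveservices.speech as speechsdk
from openai import AzureOpenAI


def detect_objects(image_path: str) -> list[str]:
    # Hypothetical stand-in: a real application would send the image to a
    # vision service and collect the detected tags.
    return ["a dog", "a red ball", "a park bench"]


def describe_image(objects: list[str]) -> str:
    # Ask an Azure OpenAI chat deployment to turn the object list into prose.
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_KEY"],
        api_version="2023-05-15",
    )
    response = client.chat.completions.create(
        model="gpt-35-turbo",  # your deployment name
        messages=[
            {"role": "system",
             "content": "Describe a scene for a visually impaired user."},
            {"role": "user",
             "content": "The scene contains: " + ", ".join(objects)},
        ],
    )
    return response.choices[0].message.content


def speak(text: str) -> None:
    # Read the description aloud with Azure AI Speech.
    config = speechsdk.SpeechConfig(
        subscription=os.environ["AZURE_SPEECH_KEY"],
        region=os.environ["AZURE_SPEECH_REGION"],
    )
    speechsdk.SpeechSynthesizer(speech_config=config).speak_text_async(text).get()


if __name__ == "__main__":
    speak(describe_image(detect_objects("camera-frame.jpg")))
```

Each stage could be swapped out or extended, and managing exactly this kind of orchestration is what frameworks like LangChain, Semantic Kernel, and Prompt Flow are for.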
Introducing Azure AI Studio
As a result, Microsoft is bringing its various Azure AI development tools into one new environment, Azure AI Studio. Introduced in a public preview at Ignite 2023, Azure AI Studio is, for now, focused on building Copilots, Microsoft’s name for generative AI-powered applications. AI Studio includes support for mixed-model multi-modal tools, and for the Azure AI SDK. The overall aim is to allow you to experiment inside the Studio before building your refined model into a production service.
While Azure AI Studio is in public preview, using Azure OpenAI models in your application requires approval from Microsoft. You will need to be working on a project for an approved enterprise customer, which means working directly with a Microsoft account team. You will also need a specific use case for your project, as this will be used to scope access to the service for both you and your users. For example, if your application will use sensitive data, you will likely be required to limit it to internal users on secured internal networks.
There’s no need to create a new resource to work with Azure AI Studio—it’s a standalone service that sits outside the Azure Portal. Simply log in with an Azure account to start working. AI Studio opens to an introductory home screen that gives you access to a catalog of models, as well as the Azure OpenAI service. Other options provide links to the familiar Cognitive Services APIs, and to content safety tools that help you reduce the risk of including unsuitable materials in training data or in the prompts used in an AI-powered application.
There are four tabs in Azure AI Studio: Home, Explore, Build, and Manage. On the Home tab, in addition to the links to the rest of the service, you’ll see a number of sample projects hosted on GitHub. These give you the necessary scaffolding to start building your own code. One sample shows you how to build an Azure AI-powered Copilot, and another shows you how to mix different AI services to build a multimodal application.
Building AI applications in Azure AI Studio
Getting started is simple enough. You begin by creating an AI-specific resource to manage the VMs and services used for your application. Azure AI Studio walks you through a familiar Azure set-up wizard, creating this resource and its AI services. Interestingly, the default configuration includes the renamed Azure Cognitive Search, now called Azure AI Search. That choice shows Microsoft is taking an opinionated approach to AI application architectures, expecting an external index of embeddings to ground your application and reduce the risk of “hallucinations” due to prompt overruns.
You can now add an AI model to your Azure AI Studio instance, for example an Azure OpenAI generative AI model. The model is added to the resource group you’re using for your AI application, so you can control network access and prevent unauthorized use of your API. You can lock access down to a specific virtual network, so the only traffic comes from your application. For even more control, you can disable public network access completely, creating private endpoints on specific subnets.
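As a rough illustration, here’s what that lockdown might look like from code rather than the portal, using the azure-mgmt-cognitiveservices management SDK. The resource names, subnet ID, and property shapes are assumptions and may differ between SDK versions, so treat this as a sketch rather than a recipe.

```python
# Restrict an Azure OpenAI resource to a single subnet with the
# azure-mgmt-cognitiveservices management SDK. Names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import (
    Account,
    AccountProperties,
    NetworkRuleSet,
    VirtualNetworkRule,
)

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-ai-rg"
ACCOUNT_NAME = "my-openai-account"
SUBNET_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/my-ai-rg/providers/"
    "Microsoft.Network/virtualNetworks/my-vnet/subnets/app-subnet"
)

client = CognitiveServicesManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deny traffic by default and allow only the application's subnet. For
# private-endpoint-only access, set public_network_access="Disabled" instead.
update = Account(
    properties=AccountProperties(
        network_acls=NetworkRuleSet(
            default_action="Deny",
            virtual_network_rules=[VirtualNetworkRule(id=SUBNET_ID)],
        ),
    )
)
client.accounts.begin_update(RESOURCE_GROUP, ACCOUNT_NAME, update).result()
```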
There’s a large catalog of available models. You’re not limited to OpenAI models: there’s support for Meta’s Llama, open-source models on Hugging Face, Nvidia’s collection of foundation models, and models from Microsoft Research. You can choose models directly or use a list of inference tasks to pick the model that’s right for your project. Usefully, the catalog is interactive, so you can try out basic interactions before deploying a model into a project.
Building an AI-powered application in Azure AI Studio can be quite simple. Once you’ve created a deployment and selected your model, it’s ready to use. There’s a simple playground you can use to test prompts and model behavior, for example looking at completions or running an AI-driven chat session. Initially you won’t be using the model with your own data, so it will only give you generic answers.
Once you’re satisfied with your basic prompts and the performance of the model you’re using, you can start to modify its behavior by adding data. Data sources can be uploaded files, Azure Blob storage, or an Azure AI Search index. This last option allows you to quickly bring in a pre-processed vector index, which will increase accuracy and speed. Files can include PowerPoint, Word, PDF, HTML, Markdown, and raw text. New data will be indexed by Azure AI Search, ready to ground your AI model.
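Under the hood this is a retrieval-augmented generation pattern, and you can reproduce the gist of it with the azure-search-documents and openai packages. In this sketch the index name, the content field, the deployment name, and the environment variables are all assumptions; Azure AI Studio wires the equivalent plumbing up for you.

```python
# A minimal retrieval-augmented sketch of what grounding does: fetch relevant
# passages from an Azure AI Search index and put them in front of the model.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="product-docs",  # hypothetical index built from your files
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)
openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2023-05-15",
)

question = "How do I reset a device to factory settings?"

# Pull the top few matching chunks and paste them into the system prompt.
hits = search.search(search_text=question, top=3)
context = "\n\n".join(doc["content"] for doc in hits)  # assumes a 'content' field

answer = openai_client.chat.completions.create(
    model="gpt-35-turbo",  # your deployment name
    messages=[
        {"role": "system",
         "content": "Answer only from the following documents:\n" + context},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```

The grounded system prompt is what turns those generic answers into ones based on your own documents.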
Azure AI Studio keeps you notified of costs at all steps of the process, so you can make informed decisions about what features to enable. This includes whether to use vector search or not. Once the data has been ingested, you can use the playground to test your model’s responses again, ensuring that they are now grounded.
The model can now be deployed as a web app for further testing, adding authentication for other tenant users via Entra ID. At this point you can export the playground contents to Prompt Flow for additional development.
Chaining models, prompts, and APIs with Prompt Flow
Prompt Flow is Azure AI Studio’s tool for chaining models, prompts, and APIs to build complex AI-powered applications. It gives you the tools to manage system-level prompts, user input, and services, using them as part of a flow, much like those built in Semantic Kernel or LangChain.
Prompt Flow gives you a visual view of the elements of your application and how each step feeds into the next, allowing you to construct and debug Copilot-like services by linking nodes that perform specific functions. These nodes can include Python code, letting you bring in data science tools. While you can build your own flows from scratch, Prompt Flow comes with a set of basic templates that provide the necessary scaffolding for further development, including scaffolds for building long chats with a conversation memory.
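A custom node is essentially a Python function marked as a tool. The sketch below assumes the promptflow package; the function name and the post-processing logic are illustrative only, and a real node would take its inputs from the outputs of earlier nodes in the flow.

```python
# A sketch of a custom Prompt Flow node: a Python function decorated as a
# tool, here cleaning up an LLM's raw answer into a list of bullet points.
from promptflow import tool


@tool
def extract_bullet_points(llm_output: str) -> list[str]:
    """Turn an LLM's raw text answer into a clean list of bullet points."""
    lines = [line.strip("-* \t") for line in llm_output.splitlines()]
    return [line for line in lines if line]
```

Wired into a flow, a node like this would sit after an LLM node, with the flow graph handling how prompts, outputs, and any conversation memory move between steps.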
Prompt Flow lets you work in both Azure AI Studio and Visual Studio Code, giving you a choice of development environment. The code-based approach loses the visual flow graph, with connections and flow elements defined in YAML. However, the Prompt Flow extension for VS Code not only lets you work with the code of your flow, but also gives you a visual editor and a view of your flow graph.
Azure AI Studio is still in preview, but it’s already offering an interestingly opinionated take on AI application development. Microsoft’s collection of AI tools shows that the company has adopted generative AI wholesale, incorporating the lessons it has learned in producing trustworthy Copilots. The result promises to be a fast path to bringing generative AI to your applications and data.