Red Hat on Tuesday took a significant step toward integrating artificial intelligence into enterprise IT automation. The company announced the general availability of its Model Context Protocol (MCP) server for the Ansible Automation Platform, allowing external AI agents to connect and interact with Ansible. At the same time, Red Hat introduced a new automation orchestrator, currently in technology preview, that routes all AI-generated actions through pre-approved, deterministic playbooks. The goal is to give enterprises the power of AI-driven automation while maintaining strict control over what those AI agents can actually do.
The announcement comes amid growing concerns about AI agents making unauthorized changes to production systems. Recent high-profile incidents have highlighted the risks of granting AI unchecked access to critical infrastructure, including reports of companies losing entire databases due to unconstrained AI actions. Red Hat's approach aims to mitigate those risks by ensuring that AI recommendations are funneled through tested, human-verified playbooks before any change is executed.
Ansible, which Red Hat acquired in 2015, has long been a staple in IT automation, used for configuration management, application deployment, and task orchestration. The platform is known for its simplicity, using YAML-based playbooks that are easy to read and write. With the addition of AI capabilities, Red Hat seeks to broaden the platform's appeal to less technical users who can now request automations in natural language rather than writing code.
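For readers unfamiliar with the format, the YAML playbooks the article refers to look like the following minimal sketch (the host group, module options, and task are illustrative, not taken from Red Hat's announcement):

```yaml
# Hypothetical playbook: apply security updates to a group of web servers
- name: Apply security updates
  hosts: webservers
  become: true
  tasks:
    - name: Install only security-related package updates
      ansible.builtin.dnf:
        name: "*"
        state: latest
        security: true
```

The declarative, human-readable structure is what makes playbooks practical to review and approve before execution, which becomes central to the guardrail model described below.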
The MCP server, which is now generally available, is a key enabler of this integration. MCP is an open protocol developed by Anthropic that standardizes how AI applications interact with external tools and data sources. By making Ansible MCP-compliant, Red Hat allows AI agents from OpenAI, Google, Anthropic, or any model exposing an OpenAI-compatible API to access the Ansible Automation Platform. This opens the door to a wide range of use cases, from AI-assisted troubleshooting to compliance remediation and developer self-service.
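Under MCP, an agent invokes a server-exposed tool via a JSON-RPC 2.0 `tools/call` message. The sketch below shows the general shape of such a request; the tool name `run_job_template` and its arguments are hypothetical, as Red Hat has not published the Ansible MCP server's tool catalog in this article:

```python
import json

# Sketch of a JSON-RPC 2.0 request an MCP client might send to invoke a
# tool on an MCP server. The tool name and arguments are hypothetical;
# actual names depend on what the Ansible MCP server exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_job_template",
        "arguments": {"template": "patch-webservers", "limit": "staging"},
    },
}

payload = json.dumps(request)
print(payload)
```

Because the protocol is model-agnostic, any agent that speaks MCP can construct this message, which is what lets Red Hat support clients from multiple AI vendors through one interface.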
Red Hat also expanded the set of AI models supported by Ansible. Previously, the platform only integrated with IBM's watsonx Code Assistant. Now, administrators can plug in models from Google, Anthropic, OpenAI, and others. Additionally, enterprises can feed their own contextual knowledge into the platform through retrieval-augmented generation (RAG) embeddings. This allows Ansible to read and apply an organization's specific policies, maintenance windows, and infrastructure rules, making the AI more relevant and accurate.
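The retrieval half of a RAG pipeline can be sketched as follows. Real deployments use a learned embedding model and a vector store; the bag-of-words cosine scoring and policy snippets here are purely illustrative, assuming nothing about Red Hat's implementation:

```python
import math
from collections import Counter

# Hypothetical organizational policy snippets an enterprise might embed.
POLICIES = [
    "Maintenance window for production databases is Sunday 02:00-04:00 UTC.",
    "All patches must be tested in staging before production rollout.",
    "Factory floor systems may only be updated between shifts.",
]

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    # Return the policy snippet most similar to the query; in a real RAG
    # setup this context is prepended to the model's prompt.
    q = embed(query)
    return max(POLICIES, key=lambda p: cosine(q, embed(p)))

print(retrieve("When is the maintenance window for production databases"))
```

The retrieved snippet is then supplied to the model as context, which is how an organization's maintenance windows and infrastructure rules end up shaping the AI's recommendations.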
Sathish Balakrishnan, vice president and general manager of the Ansible business unit at Red Hat, emphasized the importance of keeping AI on a short leash. 'AI is unpredictable,' he said. 'When you suddenly put AI into your production environment and ask it to change it, you've seen the articles about how a company lost its database.' Instead, Red Hat designed the new orchestrator to rely on pre-made, tested, and approved playbooks. If the AI proposes an action not covered by existing playbooks, a human is brought into the loop to verify the recommendation before it can be executed.
This approach not only enhances safety but also reduces costs. Calling a large language model for every minor automation task—such as patching a machine—is unnecessarily expensive. 'Why would you use AI just to patch a machine?' Balakrishnan asked. 'We all know tokens are expensive. We know the best way to patch a machine—why call an AI to do that when you already have a playbook that's been in use for ten years?' The deterministic playbooks are testable, repeatable, and far cheaper to execute than LLM invocations.
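The gating logic Balakrishnan describes can be sketched in a few lines. The playbook names and the review queue below are hypothetical; the point is the control flow, in which only pre-approved playbooks execute directly and everything else is routed to a human:

```python
# Hypothetical catalog of tested, human-approved playbooks.
APPROVED_PLAYBOOKS = {"patch-os", "rotate-certs", "restart-webservice"}

def dispatch(ai_proposal: str) -> str:
    """Execute an AI-proposed action only if an approved playbook covers it;
    otherwise queue it for human review before anything runs."""
    if ai_proposal in APPROVED_PLAYBOOKS:
        return f"executing approved playbook: {ai_proposal}"
    return f"queued for human review: {ai_proposal}"

print(dispatch("patch-os"))       # covered by an existing playbook
print(dispatch("drop-database"))  # not covered: human brought into the loop
```

This also captures the cost argument: the approved path never invokes an LLM at execution time, so routine tasks like patching run as cheap, deterministic playbook calls.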
Industry analysts have weighed in on the move, largely praising the cautious approach but also warning of the inherent risks. Paul Nashawaty, an analyst at Efficiently Connected, noted that the safety controls are vital. 'The security concerns are very real,' he said. 'If those agents are connected to highly privileged automation systems, the blast radius can become enormous, including accidental production outages or destructive actions.' He recommended that companies avoid giving AI unrestricted production access, broad admin privileges, or autonomous control over critical systems.
Nashawaty identified the strongest initial use cases for AI in automation: AI-assisted troubleshooting, compliance remediation, developer self-service, and human-approved workflow execution. 'That means we'll see developers asking for environments in natural language, or AI systems automatically correlating alerts and suggesting fixes,' he predicted. 'Operations teams can reduce incident response times by having AI assemble and execute approved remediation steps.'
IDC analyst Jevin Jensen said he has been waiting for vendors to provide natural-language front ends for their platforms for the past 18 months. 'This really broadens the use and value of the platform to new users and improves efficiency of existing users,' he noted. However, he stressed the importance of good governance. 'It is important—with or without MCP—that enterprises properly utilize and leverage role-based access control to reduce risk.' Jensen recommended starting with development environments or less impactful cloud areas before rolling out AI-driven automation to production.
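The RBAC discipline Jensen recommends amounts to mapping each identity, including an AI agent's service account, to a role with an explicit allowlist of environments and actions. A minimal sketch, with entirely hypothetical role and playbook names:

```python
# Hypothetical role grants: an AI assistant is confined to non-production
# environments and a narrow set of playbooks, per Jensen's recommendation.
ROLES = {
    "ai-assistant": {
        "environments": {"dev", "staging"},
        "playbooks": {"patch-os", "provision-env"},
    },
    "ops-admin": {
        "environments": {"dev", "staging", "production"},
        "playbooks": {"patch-os", "provision-env", "failover"},
    },
}

def allowed(role: str, environment: str, playbook: str) -> bool:
    grants = ROLES.get(role)
    return bool(
        grants
        and environment in grants["environments"]
        and playbook in grants["playbooks"]
    )

print(allowed("ai-assistant", "production", "patch-os"))  # False
print(allowed("ai-assistant", "staging", "patch-os"))     # True
```

Starting the AI role with no production grants at all, then widening access as trust builds, mirrors the dev-first rollout Jensen suggests.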
In addition to the AI announcements, Red Hat introduced other new features for Ansible. Administrators can now delegate the ability to trigger automations to end users—for example, allowing factory floor managers to initiate updates at times that minimize interference with manufacturing schedules. Red Hat also simplified event-driven automation by enabling multiple events to trigger the same playbook, eliminating the need for separate playbooks for each event. These features further improve the platform's flexibility and usability.
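The many-events-to-one-playbook change can be pictured as a routing table in which several event types resolve to the same remediation. The event and playbook names below are hypothetical, not taken from the announcement:

```python
# Hypothetical event routing: three distinct disk-related events all
# trigger the same remediation playbook, instead of needing a separate
# playbook per event as before.
EVENT_ROUTES = {
    "disk_full": "cleanup-and-alert",
    "disk_almost_full": "cleanup-and-alert",
    "log_partition_full": "cleanup-and-alert",
    "service_down": "restart-webservice",
}

def route(event: str):
    """Return the playbook for an event, or None if no rule matches."""
    return EVENT_ROUTES.get(event)

print(route("disk_full"))           # cleanup-and-alert
print(route("log_partition_full"))  # cleanup-and-alert
```

Collapsing duplicate per-event playbooks into shared routes is what reduces the maintenance burden the article describes.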
Red Hat’s MCP server for Ansible is now generally available, and the new orchestrator is in technology preview. Both are designed to help enterprises adopt AI-powered automation without sacrificing control or security. As AI agents become more capable, the need for robust guardrails will only grow, and Red Hat’s approach represents a pragmatic step forward.
Source: Network World News