Skip to content

Agentic AI: The rise of autonomous intelligence and the need for adaptive cybersecurity – Businessday NG

    The world of artificial intelligence is on the brink of its most significant evolution yet. Beyond the reactive systems we’ve known, a new paradigm is emerging: agentic AI. These are not merely sophisticated tools; they are autonomous systems capable of perceiving, reasoning, planning, acting, and learning to achieve complex goals with minimal human intervention. As a cybersecurity professional specialising in Generative AI Governance, Risk, and Compliance (GenAI GRC), I see this as both an unprecedented opportunity and a critical new frontier for risk management.

    To truly understand agentic AI, we need to differentiate it from its predecessors. Traditional AI functions on predefined instructions, reacting to direct inputs. Generative AI, while impressive in producing new content, still largely depends on specific prompts. Agentic AI, on the other hand, is proactive. It takes initiative based on its understanding of the environment and objectives, seamlessly combining content creation with autonomy and goal-oriented behaviour. These systems function through a continuous cycle, often called PRAPA: Perception, gathering data; Reasoning, interpreting information; Planning, developing strategies; Action, executing those plans; and crucially, Adaptation, refining future behaviour through continuous learning. This dynamic feedback loop means Agentic AI’s behaviour and its inherent risk profile are constantly evolving, making static GRC frameworks insufficient.

    Agentic AI is set to revolutionise sectors from autonomous vehicles and financial services to healthcare and cybersecurity itself. Its capacity to operate at “machine speed”, executing millions of operations constantly, enhances the potential impact of any errors, biases, or malicious actions. A single mistake could quickly cascade through vital systems, causing extensive damage. The true power of agentic AI lies in its ability to coordinate various AI models and external tools. It uses “backend tool calling” to gather real-time information, optimise workflows, and automate tasks by interacting with APIs and databases. While Large Language Models (LLMs) often form the core, Agentic AI provides the essential capability for LLMs to act. For complex problems, multi-agent systems—groups of specialised agents working together—are frequently employed. This extensive interconnectedness, while enabling powerful problem-solving, also increases the attack surface, creating a “supply chain” risk where security is only as strong as its weakest link.
    Agentic AI presents a new set of risks that go beyond traditional cybersecurity threats, mainly due to their inherent autonomy, ongoing learning, and adaptability. These systems are non-deterministic; they learn, evolve, and act based on dynamic inputs, making them fundamentally unpredictable. These risks can manifest as internal agentic risks, such as misconfigured or compromised agentic AI tools leading to intellectual property theft or privacy breaches. Additionally, external agentic risks include malicious actors leveraging adaptive AI to continuously develop attacks, learn from each attempt, operate without human input, and potentially bypass traditional security measures entirely. Specific vulnerabilities include memory poisoning, tool misuse, cascading hallucinations, intent breaking, and misaligned behaviours.

    Beyond the technical, agentic AI fundamentally disrupts traditional risk frameworks. When highly autonomous AI systems make independent decisions, the critical question of “who’s on the hook”—the developer, the manufacturer, the user, or the AI itself—becomes paramount. This leads to “moral crumple zones”, where humans are left bearing the blame for failures of complex AI systems they didn’t truly control. This isn’t just a technical glitch; it’s a profound legal and ethical challenge that can erode public trust and create significant liability. Addressing these demands requires clear legal frameworks, robust transparency, and tiered regulatory approaches. Furthermore, the speed at which adversarial AI can launch highly targeted, real-time evolving attacks creates a significant gap with human-paced defence mechanisms. Traditional patch cycles and response protocols are simply too slow. This imbalance requires a fundamental shift towards automated, adaptive, and real-time GRC capabilities that can match the velocity of AI-driven threats, elevating GRC beyond mere compliance to genuine cyber resilience.
    To navigate these complexities, organisations need a governance, risk, and compliance framework as dynamic and adaptive as the AI systems themselves. The “Adaptive GRC Shield” is a conceptual model designed to proactively manage the unique risks of agentic AI by moving beyond static controls to a continuous, living system. This model comprises five core components. First, proactive risk sensing involves continuous, real-time monitoring and predictive analytics to identify emerging risks before they materialise. Second, dynamic control mechanisms are needed to implement adaptive controls that can adjust in real time based on the AI’s behaviour and identified risks. Third, explainable decision pathways (XDP) ensure transparency and auditability of autonomous decisions, mandating immutable, cryptographically signed logs for every decision point and models that provide clear explanations. Fourth, continuous assurance and compliance shift from periodic checks to “always-on” assurance by automating control testing and providing real-time dashboards and alerts, ensuring adherence to evolving regulations like the EU AI Act. Fifth, human-centric oversight and collaboration position humans as essential partners and final arbiters, establishing AI ethics boards and developing human-AI collaboration frameworks where human judgement retains the final say.

    The “Adaptive GRC Shield” represents a living ecosystem of policies, technologies, and human processes that continuously learns, adapts, and evolves alongside the AI systems it governs. This paradigm shift requires significant investment in GRC technology and a cultural transformation within GRC teams towards a more agile, data-driven, and proactive posture.

    The rise of agentic AI demands a proactive and integrated approach to governance. Traditional reactive measures are demonstrably insufficient for systems that continuously learn, evolve, and act autonomously. Organisations must embrace continuous monitoring, adaptive controls, and real-time resilience to match the speed and sophistication of AI-driven threats. Crucially, the role of human-AI collaboration is vital in ensuring trust and control. Humans remain essential for making nuanced decisions, reacting to unexpected threats, and shaping security strategies. AI should aim to enhance human capabilities, not replace them. Building trust in these advanced systems requires establishing shared standards, transparent policies, and a persistent focus on securing data, identities, and outcomes. The ultimate success of agentic AI depends not only on technological progress but also on the effective sociotechnical integration of these systems with strong human oversight and ethical safeguards.

    In conclusion, agentic AI offers transformative potential for unprecedented efficiency, enhanced decision-making, and groundbreaking innovation across industries. However, realising this immense potential responsibly hinges entirely on establishing robust, adaptive, and forward-looking GRC frameworks. The “Adaptive GRC Shield” provides a comprehensive blueprint for organisations to navigate this new frontier. By proactively sensing risks, implementing dynamic controls, ensuring explainable decision pathways, maintaining continuous assurance, and prioritising human-centric oversight, organisations can ensure that autonomous AI systems serve human welfare and values while remaining under meaningful human control. The future of agentic AI is not just about what machines can do but about how we, as humans, choose to govern them.

     

    Adetunji Oludele Adebayo [Cybersecurity Professional, GenAI GRC Lead

    businessday.ng (Article Sourced Website)

    #Agentic #rise #autonomous #intelligence #adaptive #cybersecurity #Businessday