
This article originally appeared in our May 7th edition of the Diligent Minute Newsletter. For more insights like these, delivered straight to your inbox, subscribe here.
At our recent Elevate Leadership Summit, we brought together dozens of directors and senior executives to talk candidly about the future of governance. One question kept resurfacing: What happens when an AI system doesn’t just answer questions, but behaves like a working member of the boardroom ecosystem? The idea of “agentic AI” – AI that can listen, plan, follow through and learn over time – is no longer theoretical. For boards, that’s both a live debate and an extraordinary opportunity.
In the boardroom, information keeps multiplying, regulatory and investor expectations grow sharper, and meeting agendas are more complex than ever. But boards can’t simply add new human directors every time they need deeper expertise on emerging areas of risk like activism, cyber, geopolitics or compensation.
Enter agentic AI. Boards now have the option to add Diligent’s secure, purpose-built AI Board Member, which has the sum total of human knowledge at its disposal and can provide needed perspective and input on thorny issues facing the board. The AI Board Member ingests board packs, committee minutes, policies and past decisions; layers on public filings, news and peer benchmarks; and then interacts as a secure, governance‑aware colleague. Instead of a generic chatbot, directors can engage a “digital activist,” a “long‑term investor,” or a “cyber expert” to stress‑test strategies, surface red flags and run scenarios — all with sources cited and access scoped to existing permissions.
When we launched AI Board Member at the Elevate Leadership Summit, it understandably prompted spirited debate. If an AI agent is helping you model how an activist might attack your strategy, or replaying years of cyber incidents to flag patterns you’ve missed, what does that mean for the business judgment rule? How will courts weigh a decision where AI played a role in preparation? We don’t yet know. Technology has always outpaced legislation and governance frameworks; agentic AI will be no exception.
But history also tells us that at each inflection point in how knowledge is created and shared, the tools that expand access to information and strengthen inquiry have come to be seen as advancing good governance. The focus should be on how responsibly boards use AI tools, not whether they should exist.
Of course, judgment still belongs to humans. Boards must set boundaries for what an agent can and cannot do, challenge its recommendations, and bring their own experience, ethics and intuition to every decision. They must understand where the models get their information, when to demand human corroboration, and how to document that AI input was one data point among many — not an instruction to be followed blindly.
Here is the exciting flip side: As these tools become more capable and more widely adopted, it may become harder to defend not using them. If agentic AI can surface a critical inconsistency in your risk disclosures, or highlight an emerging governance norm your board has not yet addressed, is it really safer to avert your eyes?
In that sense, working with an AI Board Member is an invitation to expand the diversity of perspectives in the boardroom, deepen preparation, and make the invisible more visible before the vote is called. Used wisely, agentic AI won’t replace the hard work of director judgment. But it will make that judgment sharper and better documented.