
From reactive to predictive: Board governance in the AI age

March 11, 2026
1 min read

Hosted by:

Nithya B. Das

General Manager, Governance and Chief Legal Officer

With Guests:

Elena Hera

Partner at Goodwin, Public Company Advisory

Kaitlin Betancourt

Partner at Goodwin, Data, Privacy & Cybersecurity

Also in this episode:

What “duty of care” means in an AI-enabled enterprise, and how boards can move from episodic, reactive oversight to more predictive, data-driven governance.

How to assess material AI use cases, from customer-facing tools to risk, compliance, and financial reporting, and calibrate board attention accordingly.

Core components of an AI governance framework, including acceptable-use policies, ownership and decision rights, “human in the loop” safeguards, vendor due diligence, and integration with enterprise risk management.

Intro/Outro:

Welcome to the Corporate Director Podcast, where we discuss the experiences and ideas behind what's working in corporate board governance in our digital world. Here you'll discover new insights from corporate leaders and governance researchers with compelling stories about corporate governance, strategy, board culture, risk management, digital transformation, and more.

Dottie Schindlinger: Hi everybody, and welcome back to the Corporate Director Podcast, the Voice of Modern Governance. My name is Dottie Schindlinger, Executive Director of the Diligent Institute, and I'm joined once again by my co-host extraordinaire, Meghan Day, strategy leader here at Diligent. Meghan, how are you doing today?


Meghan Day: Greetings, Dottie, from under two feet of snow here in New York City. But a warning to our listeners: if you hear the pitter-patter of little feet, my daughter is home from school today, so we may have an extra special guest on corporate governance today.


Dottie Schindlinger: I'm delighted to have our extra special guest. I wonder what she thinks of corporate governance.


Meghan Day: I'm assuming it sounds something like a raspberry noise. Yes.


Dottie Schindlinger: Well listen, Meghan and I are gonna forgo a little bit of our regular back and forth because we have a special episode for you all today. You know, we've been getting lots of questions and having lots of conversations with directors and general counsel and others about using AI in governance settings, and there were enough questions that we started to think maybe we should do a special episode just on this topic.


So we invited Nithya Das, who is our general manager of the governance business unit and also Diligent's chief legal officer, to bring in two of her former colleagues, Elena Hera and Kaitlin Betancourt, who are both partners at Goodwin, to talk about this very issue.


So we're gonna basically turn it over to that interview and give it a listen, and then maybe we can come back and reflect a little bit on what we heard.


Meghan Day: Sounds great. It's such an important topic and I think we will really benefit from this level of expertise.


Dottie Schindlinger: Awesome.


Let's give it a listen.


Nithya Das: Hi, my name is Nithya Das, general manager of the governance business and chief legal officer at Diligent, and I'm so excited to be your special guest host for this interview. Joining us on the Corporate Director Podcast today are Elena Hera, partner at Goodwin in the Public Company Advisory, Capital Markets, and ESG and Impact practice areas, and Kaitlin Betancourt, partner in Goodwin's Data, Privacy & Cybersecurity practice and a member of the firm's Complex Litigation and Dispute Resolution group.


Elena Hera: I'm Elena. I'm a partner in Goodwin's New York office. I help directors navigate emerging issues that reshape governance expectations, from sustainability oversight to, more recently, AI, and I try to guide boards in translating these evolving risks into defensible processes that meet fiduciary duties.


Kaitlin Betancourt: Pleased to be with you here today. I am Kaitlin Betancourt. I'm also a partner in Goodwin's New York office, and my practice focuses on cybersecurity and AI governance and risk management. So from a cybersecurity perspective, I assist clients end to end with proactive work, establishing cybersecurity resiliency and readiness.


So making sure that they have a holistic incident response program, for example, and associated policies and procedures, and that they know how to operationalize cyber readiness. And then on the AI governance side of things, it's somewhat similar: I assist clients with establishing a holistic AI governance framework, which really helps them get their arms around their use of AI and the risks presented by the usage of AI within their company, and then the companies with which they do business.


Nithya Das: Awesome. Well, as you can imagine, our clients and customers have AI on the brain, and so it's great to hear about the work that the two of you are doing and your background at Goodwin, because I think it's gonna be really informative for this discussion. I'd say there are increasingly questions about some of the legal considerations that corporate leaders and board directors should keep in mind when it comes to using AI.


This is especially on the rise in terms of what a safe application of AI looks like, particularly with increasing questions around discoverability and retention and what the correct guardrails are to have in place as you use AI. So when directors think about their duty of care in the context of using AI, what does reasonably informed oversight look like, and how does this play out in the boardroom?


Elena Hera: I'll take this one. As you mentioned, this is definitely top of mind in every boardroom, and I would say directors across the spectrum, from what I call data curious to more technologically sophisticated, are all asking the same fundamental question: how can we meet our fiduciary duties in an AI-enabled enterprise, and what practical steps should we require of management to ensure that our oversight is effective, proportionate, and legally defensible?


To your point, what does reasonably informed oversight look like in the context of AI? I would highlight a number of points. First is materiality: AI tools and uses that are mission critical, that is, those that materially affect customers, financial reporting, regulatory compliance, safety, strategic outcomes, or the company's public disclosures, warrant sustained board attention.


And here I would note that materiality shouldn't be static; directors should periodically reassess as use cases expand and evolve in the company. Second is demonstrable and documented controls, rather than high-level management assurances about responsible AI use. For material AI uses, the board should expect to see documented policies, clearly assigned ownership and decision rights, defined human-in-the-loop safeguards, vendor due diligence and contractual provisions, and an incident response plan, among others.


And I think directors should request concrete examples that illustrate these controls are actually operating in practice, and that they're not just form over function.


And you know, I would say this documentation and controls ecosystem is basically the basis for the board's reasonable reliance on management's representations. Another point to highlight here is cadence and escalation: AI systems and their inputs evolve, so oversight must be periodic and structured, not episodic and reactive.


From my experience, boards benefit from a consistent format and cadence in most board-related matters, and especially in AI reporting that covers, in this case, performance indicators, risk metrics, compliance developments (we're dealing with a very uncertain regulatory environment), and incidents. Consistency allows directors to see trends and identify gaps.


And as part of that cadence and escalation, the board should also establish explicit escalation thresholds for significant issues like customer harm, data breaches, systemic model failures, or regulatory inquiries. These should trigger notice to the full board and/or the responsible committee.


Since we're talking about litigation risk, documentation, as we know, is just as important as the practice itself. In litigation, board oversight should be documented through minutes, escalation logs, and dashboards, so the corporate record clearly reflects active board engagement. Board expertise is also very important: in 2025, about 45% of Fortune 100 companies referenced AI in director qualifications, up from 26% in 2024.


I would say that the practical expectation here is collective competence. While not every director is expected to have AI expertise, the board as a whole should have sufficient understanding of AI capabilities and risks to test management's assumptions. And lastly, but very importantly, formal allocation of oversight responsibility between the full board and/or its committees.


This is important because where AI oversight sits matters: it shapes who asks the questions and what the expertise behind the questions is. So enterprise AI risk, cybersecurity exposure, model risk management, and regulatory compliance may naturally sit with an audit or risk committee. Bias, workforce implications, and ethical use considerations may align with nominating and governance committees. And where AI is core to product development and innovation,


strategy oversight may also involve the full board or a technology committee, where one exists.


Nithya Das: I think I picked out a few practical implications for some of our in-house legal and governance teams in terms of AI oversight: a consistent practice of reviewing with the board


your AI governance framework, which, Kaitlin, I'd love to ask you about in a second; making sure that you're documenting those processes of reviewing these materials and the board oversight; and then the last one that you touched on, which I thought was interesting, is board expertise on AI, making sure that you have that competency across the full board.


Kaitlin, I'd love to bring you into the conversation and get a sense from you about the AI governance frameworks that boards should expect to see from management. We've been hearing this phrase, if you will, of AI governance frameworks going around quite a bit. What are some of the most essential elements of one, and where should it live?


Because I think, to Elena's point, AI competency and AI oversight aren't restricted to just one part of the board, but where should that AI governance framework live?


Kaitlin Betancourt: That's absolutely right. And just to build on what Elena was saying, in order to actually facilitate systematic briefing to the board and proper escalation, the company needs to have sufficient AI governance infrastructure at the management level.


And so that's what is referred to when you hear the phrase AI governance framework. It really refers to the infrastructure being operationalized at the management level to ensure that risks associated with AI usage are being properly managed in this very complex and uncertain regulatory environment.


We find that it's best practice to have a holistic, principles-based framework in place that includes not only an AI acceptable use policy, which at this point is basically table stakes, because you need to make sure that your employees know what they're permitted to do and not permitted to do.


So that's one key element of a governance framework, that acceptable use policy. But that's just one policy that governs employee usage of AI and the company's principles around AI usage. So what do we mean by the broader framework? I mentioned operationalization, right? So how do you actually make sure that the policies and procedures you're putting in place are being followed?


So the way that we are advising our clients to incorporate governance into their processes and procedures is to consider it as part of the broader enterprise risk management framework. This may be done in different ways depending on the size of the company and the maturity of the company.


For smaller companies, there may not really be a formal enterprise risk management program, but in larger companies there likely is, and AI governance should really be integrated into it because it cuts across so many functions and disciplines.


It shouldn't be siloed. And then getting back to the key elements, it gets back to the saying that if everyone's doing it, no one's doing it: someone needs to be responsible for the program. Where is the oversight at the management level? In order to be able to escalate to the board, someone needs to have responsibility for the


AI governance program. Now, that doesn't need to be one person. It could take the form of an AI governance committee, for example, and the responsibilities of that committee are set forth in a charter, so that they have their responsibilities outlined for them and parameters to work within.


The other key elements of an AI governance framework include processes for evaluating new AI tools and new models. That could be proprietary models that the company is developing itself, or it could be third-party tools. And so there should be some sort of intake form, for example, an intake form that gets evaluated by the proper stakeholders within the company, and then potentially by the governance committee.


And then every new usage of AI should be incorporated into a register, so the company can easily find where they're using AI. And this all links together; it comes together in the acceptable use policy, for example, so employees can see what tools they can use and for what purpose.


Nithya Das: I even see a lot of parallels in your description of the AI governance committee to how I've always run enterprise risk committees within the companies that I've been a part of, where typically we'll have two people, myself as chief legal officer plus one other, maybe the CFO, co-chairing the enterprise risk committee, with


functional leaders from across the company. We use a somewhat similar construct for our AI governance committee here at Diligent. I guess I'm curious, Kaitlin, who do you typically see either chairing, running, or co-chairing AI governance committees with the clients that you're working with?


Kaitlin Betancourt: Yeah, I think it's consistent with what you just mentioned.


So very frequently there's the chief legal officer, the chief information security officer, the chief technology officer, and then perhaps a chief people officer.


Nithya Das: So at Diligent we're pretty heavily focused on the board and leadership governance component.


And so I'm curious, as management teams start using AI to draft their board materials or summaries, board minutes, et cetera: how should they think about topics like records retention? Which AI-generated materials could realistically become part of their corporate record? And are there any best practices you might offer for how they can avoid creating unnecessary risk?


Elena Hera: I don't know if you want to start us off with that one. Sure, happy to. It's a difficult and complex question, because we don't yet have full or clear legal guidance on some of it. I think the operating assumption is that any material that meaningfully informs a board decision may become part of the corporate record, and thus subject to discovery.


And that includes board books, exhibits, summaries circulated to directors, incident reports, and, if retained in corporate custody, prompts, outputs, and audio or text transcriptions. And I want to highlight here this idea of not only the final product, but also the inputs used to generate these materials.


Because I think that's where we tend to be a bit more unguarded than perhaps we should. So, you know, think about drafts uploaded into the system, background memoranda, and again, specific interrogative prompts entered by management, presumably to generate those materials. I'll caveat this by saying that I'm not a litigator per se, but I can highlight a few principles here from my own board advisory practice.


And the first one I think would be the fundamental one: manage risk through discipline. So use approved, secure AI platforms governed by contractual confidentiality and data use limits, and confirm with vendors whether inputs are retained or used for model training, and for how long. Then classify


AI artifacts by materiality and align retention accordingly. So as I mentioned before, while materials that reach the board are presumably by definition material, companies may be justified, where documented, in applying shorter, defensible retention periods to drafting-stage inputs like interim drafts or prompts, provided


those retention policies are explicit, auditable, and subject to suspension under legal hold should it come to that. And this is actually a small point, but an important one: when you implement safeguards, you do have to ensure that legal hold and e-discovery processes can capture logs, prompt histories, et cetera, when preservation is actively required.


And another important concept implicated here is privilege, right? And I think Kaitlin will address a recent case on the topic. So here, be explicit when AI tools are being used at the direction of counsel where appropriate, and label materials accordingly to support privilege claims. Limit distribution of AI-generated drafts to those who need to review them and not more broadly, and store them within systems subject to existing document management and privilege protocols.


You know, when you draft documents for the board, if you want a clear-cut delineation, avoid commingling privileged legal advice with non-privileged business analysis. And again, as I mentioned earlier, ensure that prompts, uploads, et cetera, used for legal advice are generated within secured environments covered by confidentiality agreements, and confirm that with vendors.


And a couple of other points. There's this idea that it's almost like taking a photo on a cell phone: you tend to be less deliberate than when you use film, right? So you can have multiple AI drafts, multiple queries. Make sure you treat AI drafts as provisional and require a human reviewer,


ideally someone within the legal function, to review, edit, and formally approve the content before it is circulated and stored as part of the corporate record. Clearly label drafts as privileged and confidential drafts where appropriate, and state explicitly that any AI-generated content is preliminary and subject to revision, so that it's not second-guessed as final and conclusory.


And one final important point that I want to make: a core element of effective board deliberation is an open, candid exchange of views. Recording meetings verbatim with AI tools, I think, can risk chilling that candor, because participants may assume every word is permanently logged, analyzed, and potentially discoverable.


So I would caution, not necessarily against this practice, but I would counsel thoughtfulness around it, for two reasons. One is the obvious one: litigation exposure from raw, unedited logs. And the second one, which is a bit more of a human element, is this erosion of frank deliberation, with the board potentially being on guard.


And if a company does choose to proceed with recording board meetings in this way to facilitate minute production, it needs strict protocols governing notice, consent, access controls, retention, and privilege. And I think also a candid conversation with board members about whether this type of approach fits the board's culture and its appetite for these risks.


Nithya Das: I think that's a great point. I've had many discussions with peers around the potential impact on board culture. One of the things we've done is we've actually built our minutes product to allow for taking minutes without recording, so creating minutes from shorthand notes or creating a first draft of minutes from the board deck.


We do allow the ability to upload a transcript, but I was pretty surprised over the course of 2025 by the number of client requests we had for the ability to record board meetings. It was much higher than I thought it would be, and I think it's in part because there is this desire to be able to quickly capture action items and minutes, just like we might if we were using Teams or Zoom to record a meeting.


And so one of our design principles as we develop these tools is privacy and security by design, and we've been thinking a lot about even just small features, Elena, like the ability to toggle recording on and off so that you can still, for a more sensitive discussion, facilitate that open and candid


communication, which I think is a really great point. Kaitlin, Elena touched on this a little bit, but there's been a lot of talk about ensuring that there's a lawyer in the loop when AI is used in any kind of legal or quasi-legal context in the boardroom setting. Where would you recommend drawing the line between


acceptable uses of AI tools by directors and situations where, as we were just starting to touch on, it might be too unethical or risky to proceed without a formal legal review?


Kaitlin Betancourt: Yeah. You know, I think, as we've highlighted throughout this podcast, this is an evolving area, and it really underscores the need for AI literacy and governance, with particular attention to emerging legal developments.


So, for example, Elena referenced this: there was a recent decision, as recent as a little over a week ago, by Judge Roff, in which the US District Court for the Southern District of New York ruled that documents a client created using a commercial generative AI tool and sent to his lawyer were not protected by privilege.


There really needs to be significant attention and judgment applied, along with efforts taken to bolster privilege claims when that is the desired outcome. And so companies should really pay attention to this, consult with counsel, and stay tuned for the guidance that comes out, especially after that decision.


So, for example, we'll be drafting a client alert where we


Nithya Das: I'll open this up to either of you, but if you were sitting on a board today, what specific questions would you ask management and the general counsel about how AI is being used in board workflows, how AI-related data and outputs are being retained, and how you as a director can demonstrate that you've exercised the appropriate level of legal and ethical oversight?


I know that's a big question, so feel free to tackle it in smaller pieces if you'd like.


Kaitlin Betancourt: Yeah, I can take a crack at this first. So, I mean, if I were sitting on a board today, first I would want to identify what the key risks are for the company.


And in order to even parse and understand those risks, I need to make sure that I am educated myself on AI technology. And so I would be thinking about how, as a board, we are getting the expertise we need in order to ask the challenging questions of management about their management of those risks.


So once we identify the risks for the company, then I would be thinking about questions posed to management about their risk management of AI, their governance of AI. How is the company dealing with it holistically? And how am I getting sufficient information to continually assess those risks


on a lifecycle basis? Because it won't just be a snapshot in time. This will be something that, as a board, we need to keep tabs on into the future.


Elena Hera: And if I can add something there, Kaitlin: I feel like AI can seem overwhelming, right? It's moving very fast, and there are so many applications that


could potentially be useful to the company. I just want to insert this idea of materiality. Obviously the board is not responsible for day-to-day risk oversight, so in addition to making sure that the company, like Kaitlin said, has an enterprise risk management system that's functional and reliable in connection with the risks it looks at, the board, I think, needs to develop sophistication in understanding what's material


and what's not, so that it can properly allocate attention and resources. And this involves education on the AI tools themselves and also, again, how they fit the bigger company strategy, products, and processes.


Kaitlin Betancourt: I'll also add that when we talked about governance, we didn't touch on vendor due diligence and actual risk assessments.


Nithya Das: So I wanted to just make sure that we don't end this conversation without touching on that topic: what questions should board members be asking about actually using AI in their governance workflows?


Elena Hera: I would be interested in some of the points we already touched upon, meaning: what AI tools and use cases can I safely use? Do I have to use company-provided AI tools? What categories of information, prompts, and outputs are discoverable? How should I think about that?


So I would want to make sure that I'm sufficiently educated with regard to my own risk in using AI and the risk that I'm imparting to the company through that use.


Nithya Das: Yeah, I guess two others I would throw in there would be just making sure I understand what's happening from a training data perspective.


And then I think you both touched on records retention earlier, but that would be another one that I would want to make sure I understood. So I did an interview with Governance Intelligence recently, and I made a statement to them along the lines of: I think in 2025, the question was, what are the risks of using AI?


It feels as though, as we get further and further into 2026, the question is becoming more: what is the risk of not using AI? Maybe that's a provocative statement, but I personally think that for board directors to not avail themselves of AI tools is going to, at some point, mean that they are not fulfilling their duty.



There are so many tools available to us today, some safer than others, of course, but these tools have the ability to do a deeper level of analysis and to bring in other data and insights that are not readily available to us as just human beings. And that's where, I think, at some point you cross a line of maybe not fully doing your job if you're not using AI.


And I'm curious to know what your reactions to that statement are.


Elena Hera: I agree with that. I think the crux of board-management interactions is information asymmetry, right? Management is so immersed in day-to-day operations, and then the board provides oversight from somewhat of a distance.


So there's real value in using AI tools to minimize this information asymmetry, or to make sure that the data you're getting actually conveys the information you need to be getting: summarizing, benchmarking, probing into what's being provided in board packages, and we've all seen them.


I definitely agree that AI is integral to more effective board functioning. I think it'll change the tempo of governance, with boardrooms becoming less reactive and more predictive. And I had this thought that this might raise interesting questions about the board potentially infringing on management's territory if directors can get more sophisticated peer comparisons and, again, extract more value from the data that they're being provided.


So I'll be curious to see how that evolves, the balance of board oversight versus management's remit.


Nithya Das: I actually feel like maybe you've been sitting in on a couple of our conversations, which I know you have not, especially on that last point around the lines between management and the board as we've been building out our agentic capabilities here.


That's been something that's been really top of mind for me. And Kaitlin, I don't know if you want to offer anything else on that question.


Kaitlin Betancourt: I think it's really evolving. I think we want our boards to be smart about how they're exercising their duties, and I think it's quite nuanced.


So I would say it's not black and white. I think Elena's point about boards becoming more predictive and less reactive is a really interesting one. And I think that the role of the board is very focused around judgment. And so how the board could use AI-powered tools to assist it in exercising good judgment is something that, I think, would be worth a deeper dive.


Nithya Das: Yeah. Well, as you said, it's definitely not black and white, and I think that is where the most interesting questions and the most interesting changes end up happening. So this is one I'm super excited to watch, and of course I have the privilege of being here at Diligent, right in the driver's seat, helping to bring about some of these changes.


So I want to just wrap us up with a couple of questions that we ask all of our guests, and we can keep these as a quick lightning round. We'll start with Elena, and then Kaitlin, you can answer, and we'll go through these three questions. First: what do you think will be the biggest difference between boardrooms today and boardrooms 10 years from now?


Elena Hera: The idea of AI changing the tempo of governance, with boardrooms becoming less reactive and more predictive. I think this will be one of the interesting and worthwhile outcomes of AI.


Kaitlin Betancourt: Yeah, and I would say technological savviness, I think, will change, as well as board expertise and composition and how that's shaped.


Nithya Das: All right. Very interesting. Elena, what was the last thing you read, watched, or listened to that made you think about governance in a new light?


Elena Hera: I think it's the idea that governance is not just about supervising complex systems and the paradigm shift that needs to happen for this broader oversight mandate to be effective.


Kaitlin Betancourt: So recently I actually took a board readiness course. This is both an answer and a plug for a company that we sponsor, the Athena Alliance. It's an executive development platform that empowers leaders and board members. I did take their board readiness course, and I found it very illuminating in terms of practical advice on what you should be thinking about when you're part of a board.


Nithya Das: And then for both of you, what's your current passion project?


Elena Hera: I would actually say it's my own relationship with AI, the art of using it: not just generating answers, but framing the right questions, interrogating the output, calibrating reliance.


It's been a really interesting ongoing self-education project, whereby you can observe that AI can accelerate the analysis, but judgment still has to be human. So it's about striking the right balance between efficiency and caution, or reliance: how much do you delegate out?


Kaitlin Betancourt: Yeah, and for me, I am immersing myself in biotech learning, so I am going to be focusing in on life sciences and AI in the context of drug discovery.


So I am currently learning a lot about DNA and I'm really enjoying it.


Nithya Das: That's amazing. Very impressive, and it sounds like you both probably have very little downtime between day jobs and those passion projects. But that's awesome. Thank you for joining us on the podcast today, and thank you for sharing all of this great guidance and these thoughts on how our clients can safely tackle AI as we go into the rest of the year.


And we'll definitely be keeping an eye out for that client alert. Thank you both. Thank you for having us.


Meghan Day: All right, Dottie, that was such an in-depth interview. I think that's what happens when you have lawyers interviewing lawyers, but I really appreciated the commentary and the context. And honestly, I was a bit surprised; it could have been a much shorter interview, in that I was sort of expecting a room full of lawyers to say, don't use AI.


Dottie Schindlinger: You know, honestly, I think we're kind of past those days now, Meghan, right? It's like saying don't use your phone. I think, honestly, you've just got to understand that this is now table stakes and it's everywhere. It's in wide usage, and you just have to figure out what to do about it and how to deal with it


appropriately. And so that was why I thought it was great to have them come on the show. But I would also say, you know, it is such a hot topic. I don't know if you caught it, but there was an article, I guess it was a couple weeks ago now, in Agenda, Board Agenda. I love that publication, by the way. Just absolutely love it.


And it's still in the realm of academia, so I'll acknowledge that this isn't yet reality. But you've got academics calling for informed stewardship. So what they're saying is that the board should be made liable for not using AI in boardroom settings to make decisions, because if they're not,


they're not performing their duty of care. Did I call it or what, Meghan? We knew this was gonna start to happen this year. Again, it's coming from academics, so it's all sort of a think piece, but honestly, I think we're gonna start to see shareholders pick up on this as well.


Meghan Day: But on the flip side, at some point, is not leveraging AI a breach of duty of care as well?


Dottie Schindlinger: Well, that's what they're arguing. They're saying, you know, if you don't use AI, it's breaching duty of care, because you're not making the best decision; you don't have enough information to make the best decision. And then on the flip side, I think you're gonna see people saying you used AI inappropriately, or


you had AI do your job for you, and it didn't go well. And we all know very bad things can happen in that regard. So it is interesting. I'm glad we had a chance to have this episode, Meghan, because there is so much to unpack. This may be one that we need to do again at some point, maybe a few months or a year from now, because things are gonna change, and they're gonna change very, very quickly.


Meghan Day: That was my big takeaway: that AI is changing the tempo of governance. You know, this idea of being reactive is a thing of the past. In a lot of ways it's about being predictive now. As board members, we always joke about having a crystal ball, but there's a lot of power to what AI brings to the table.


With great power comes great responsibility, though. That was also the big takeaway from the conversation for me.


Dottie Schindlinger: By the way, you haven't brought it up yet, Meghan, but I feel like you might need to bring it up: the rant by Matt Schumer on X. Oh, yes. Do you wanna talk a little bit about that? Because I took the time and read it, and then read the article about the rant on X, and honestly, I haven't been able to stop thinking about it ever since I read it a week ago.


Meghan Day: We'll link out to this, and I agree with you, Dottie. This is something that has been just lodged in my brain since I first read it. A gentleman who works in the AI space basically wrote what he describes as an explainer about AI for his parents.


But it goes into what I think is the clearest explanation of the potential impact and the speed of change that we are now facing. And for me, it has just caused me to take a step back and, one, think about job security in a way that I haven't before, as in, holy crap, are we all just gonna be unemployed


six months, eight months, 12 months down the line? I don't wanna think about that too much, but that's what we're talking about in terms of the level of disruption here: the world, human beings, how we operate. All of this is going to be fundamentally different very, very quickly.


Dottie Schindlinger: I think the thing about this particular rant that he created, and by the way, you described it perfectly, I mean, it's describing AI to your parents.


It's great. The thing that I didn't know was that this all changed on February 5th, 2026. Basically, there were two major AI labs that released new models on the same day. So OpenAI released ChatGPT 5.3, and Anthropic released Opus 4.6, and both of them are so far beyond where they were days before that the way he describes it is, something clicked for me.


Not like a light switch, more like the moment you realize the water has been rising around you and it's now at your chest. And he goes on to provide a bunch of very specific examples of things that happen now where, for the first time, he's seeing these AI models exercise things like judgment and taste, doing their own testing of their own work and then refining it in real time.


He's like, this system now does my job better than I can. And this is someone who's been working in technology his entire career. It is really worth a read. And it doesn't matter if you work in technology or anywhere else, it is worth reading this article to just understand the breathtaking speed with which this is upending everything.


And the reason we haven't all yet noticed is that it's happening on the front lines right now; it's really happening to coders and content creators and marketers and others like that. But once you see how it is changing things and how dramatically things are changing, it's a matter of time before


it is perceived everywhere, in every part of your daily life. And it's gonna happen faster than any of us are ready for. So I think this is definitely worth a read. And I think the timing of it was quite interesting after talking to a bunch of lawyers about using AI. I think it's really quite relevant.


Meghan Day: Yeah, I was having lunch with my dad last week, and he basically equated the rise of AI to the invention of electricity in terms of the impact it is going to have on the world. And if you put it in that context, that's pretty profound to me.


Dottie Schindlinger: I have to say, there was a line that he put in there that I was so glad he included.


He's like, you know, the line is, "But I tried AI and it wasn't that good." He's like, I hear this constantly. I understand it, because it used to be true. If you tried ChatGPT in 2023 or early 2024 and thought, this makes stuff up, or, it's not that impressive, you were right. He's like, those early versions were genuinely limited.


They hallucinated; they confidently said things that were nonsense. He's like, that was two years ago, and in AI that is ancient history. So that sent a cold shiver up my spine when I read that line, because it's true. The models that are available now are unrecognizable from what existed six months ago.


That's the pace that we're talking about. And yeah, we'll put this on the podcast page; give it a read. Maybe we'll come back, Meghan, and have a conversation next time about this issue in more detail, but really interesting stuff.


Meghan Day: Awesome.


Dottie Schindlinger: Well, that wraps up another episode of the Corporate Director Podcast, the Voice of Modern Governance.


I'd like to say a few special thank yous: first and foremost to our AI and legal experts, Nithya Das, Elena Hera, and Kaitlin Betancourt; our podcast producers, including Laura Klein, Kira Ciccarelli, and Steve Clayton; the sponsors of our show, including KPMG, Wilson Sonsini, and Meridian Compensation Partners; and most especially, thank you to Diligent for continuing to sponsor this show.


If you like our show, please be sure to give us a rating in your podcast player of choice. You can also listen to our episodes and see more from the Diligent Institute by going to diligent.com/resources. Thank you so much for listening.


Intro/Outro: You've been listening to the Corporate Director podcast. To ensure that you never miss an episode, subscribe to the show in your favorite podcast player.


If you'd like to learn more about corporate governance and tools to help directors do their job better, visit www.diligent.com. Thank you so much for listening. Until next time.
