Reggie Townsend, VP, Data Ethics, SAS
Reggie Townsend has spent years making the uncomfortable case that ethical AI is not a brake on progress but the very precondition for it. As agentic systems move from concept to commercial deployment — taking decisions, triggering workflows, and acting on behalf of businesses without moment-to-moment human oversight — the stakes of getting governance wrong have never been higher.
Speaking on the sidelines of SAS Innovate 2026, Townsend ranged across the moral architecture of autonomous AI, the boardroom politics of regulation, a four-pillar governance framework built inside SAS itself, and why he believes India and the global south hold a second-mover advantage that the innovation-first west may be too distracted to notice. The conversation has been edited and condensed for clarity and length.
Excerpts from the interview:
There is a persistent narrative in enterprise circles — especially in fast-moving markets like India — that governance is the enemy of innovation, and that CIOs and compliance functions are the people holding the brakes. How do you challenge that framing in a boardroom?
I think we have to be honest that there are two separate conversations happening: the public narrative and the actual conversation that takes place inside organisations. And they are not the same thing.
I was in India in February for the AI Impact Summit, and I also spent time with people in the federal government there. What I found was that the people working on AI-enabled public services — water irrigation systems, citizen-facing applications — were deeply conscientious. There was no moment in those conversations where someone said, ‘We need to choose between innovation and guardrails.’ That framing simply did not exist in the room.
So who perpetuates the innovation-versus-governance narrative? That is a question worth asking honestly. I am not saying there is no tension — there is, and that is not new. Organisations have always had to govern themselves, long before AI was part of the conversation. AI governance is not the first instance of governance inside a large enterprise. It is simply a new addition to an existing practice.
When I strip it back, most of what is being advocated under the label of AI governance is really just responsible innovation practice. Who is signing up to build AI capabilities that end up harming people? No one. And so the organisations that are not trying to do harm are, by definition, already doing something to prevent it. The question is whether that something is explicit, repeatable, defensible, and auditable. That is what a governance framework provides.

I describe it as a way of scaling our judgment — ensuring that the good judgment we want to bring to decisions is embedded in the system rather than dependent on any single individual on any given day. In a boardroom, that is precisely the function of a director. If a board cannot bring structured good judgment to an organisation, it is failing in its primary role.
As AI agents become more autonomous, who carries legal accountability when one of them makes a harmful or flawed decision?
According to the EU AI Act — which is currently the most comprehensive legislative framework we have — accountability rests primarily with the deployers, meaning the organisations that put these systems into commercial use. The Act draws a clear distinction between providers, who build the underlying models and infrastructure, and deployers, who integrate those systems into products and services and release them to the public.
AI accountability will likely follow the trajectory of product liability law and be shaped through the courts. Some of it remains to be adjudicated: the EU has tried to structure the framework, but court cases will fill in the gaps over time. What I observe is that a number of US states are now moving toward product-safety-style approaches to AI liability. The legislative patchwork is real, and it has direct consequences for product decisions: what you call your products, what features you deploy in which markets, and how you document the reasoning behind those choices.
You have been advocating for corporate self-governance. For markets where regulatory culture is still developing and where building internal moral clarity takes years, what does a practical baseline actually look like?
The framework we use internally at SAS — and that our AI governance advisory team now deploys externally — is built around four pillars, and implementation will look different depending on the organisation.
The first pillar is culture. Before you touch technology or process, you need to create the conditions inside your organisation that normalise the right kinds of behaviours. Culture is foundational, and it is also the pillar that most organisations skip because it is the hardest to measure.
The second pillar is operations — specifically, a review and redesign of standard operating procedures. AI forces you to interrogate decisions you made years ago about how and where you process data, who is authorised to make which decisions, and how your organisation is designed. Good organisational design becomes newly critical in an AI environment, and the SOPs that govern daily work often need to be rewritten from the ground up.
The third pillar is the regulatory environment. You need a clear map of every market you operate in, the current expectations in each, and how those expectations are evolving. Given the global patchwork of legislation right now, that mapping has direct consequences for your product roadmap — what you build, what you call it, and where you sell it.
The fourth pillar is controls: the auditing procedures and operational guardrails you put in place consistent with the first three pillars. Some of that can be automated. AI can help you audit AI, which is one of the more useful recursive properties of the technology. (A minimal sketch of what such an automated control might look like follows this answer.)
Inside SAS, the structural expression of this framework is our AI Oversight Committee — a cross-functional team of executives who review everything we do with respect to AI, from procurement decisions to product features to internal adoption rates. We track AI literacy across the organisation, because the EU AI Act has an explicit literacy expectation. We monitor the pace of adoption, because people can only absorb change at a certain rate. And we manage the messaging — making clear to our people that AI is being introduced to augment their work, not to eliminate their roles. All of that is governance in practice, not governance in principle.
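To make the fourth pillar concrete, here is a minimal, hypothetical sketch of an automated control, assuming an agent logs each decision with an approval outcome, a demographic group, and a human-review flag. The metric names, data shape, and thresholds are illustrative assumptions for this article, not SAS's actual controls.

```python
# Illustrative only: a recurring audit job that checks a deployed agent's
# logged decisions against simple guardrail thresholds. The metrics and
# limits are assumed for the example, not taken from any SAS product.
from dataclasses import dataclass


@dataclass
class AuditResult:
    metric: str
    value: float
    threshold: float

    @property
    def passed(self) -> bool:
        # A metric passes while it stays at or under its threshold.
        return self.value <= self.threshold


def audit_decisions(decisions: list[dict]) -> list[AuditResult]:
    """Compute guardrail metrics over a batch of logged agent decisions."""
    total = len(decisions)
    # Share of decisions taken with no human review (autonomy rate).
    autonomy = sum(1 for d in decisions if not d["human_reviewed"]) / total

    def approval_rate(group: str) -> float:
        members = [d for d in decisions if d["group"] == group]
        return sum(d["approved"] for d in members) / max(1, len(members))

    # Approval-rate gap between two groups: a crude bias proxy.
    bias_gap = abs(approval_rate("A") - approval_rate("B"))
    return [
        AuditResult("autonomy_rate", autonomy, threshold=0.80),
        AuditResult("approval_gap", bias_gap, threshold=0.10),
    ]


if __name__ == "__main__":
    sample = [
        {"group": "A", "approved": True, "human_reviewed": False},
        {"group": "A", "approved": True, "human_reviewed": True},
        {"group": "B", "approved": False, "human_reviewed": False},
        {"group": "B", "approved": True, "human_reviewed": False},
    ]
    for result in audit_decisions(sample):
        status = "PASS" if result.passed else "ESCALATE"
        print(f"{result.metric}: {result.value:.2f} "
              f"(limit {result.threshold}) -> {status}")
```

The specific metrics matter less than the pattern: the check runs on a schedule, the thresholds are set by the oversight body, and a breach escalates to a human rather than passing silently.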
You described your biggest challenge as a cultural one — the tension between personal freedom and the guardrails that responsible AI demands, and the difficulty of getting people to ask “just because we can, should we?” In enterprise cultures that reward speed and execution above all else, how do you actually institutionalise that pause?
The pause, as you describe it, is simply part of an overall governance strategy. It is important that we shift away from the mindset that governance slows innovation. Governance scales human judgment so that it becomes a natural and repeatable part of the process. In that way, it will not even be seen as a “pause” any more, but rather as one step in the larger operationalising of responsible innovation.
If the organisation is only scaling activity (shipping models, agents, copilots), risk and lost value become invisible. Governance is how you restore visibility and accountability. It should include lightweight but real checkpoints across the lifecycle: use-case intake, risk assessment, documentation, testing for bias, robustness, privacy and security, human-in-the-loop requirements, and post-deployment monitoring.
These checkpoints should be supported by clear playbooks, templates, and express lanes for low-risk use cases. If governance is seen as an impediment, people will circumvent it. Ultimately, if you optimise for speed while eroding trust, you are damaging the business.
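As an illustration of the express-lane idea, here is a minimal, hypothetical sketch of risk-tiered checkpoint gating. The tier names and the checks required at each tier are assumptions made for the example, not a prescribed standard.

```python
# Illustrative only: every AI use case passes through checkpoints, but
# low-risk cases ride an "express lane" with fewer mandatory gates.
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


# Checkpoints required per tier; the lists are assumed for the example.
REQUIRED_CHECKPOINTS = {
    RiskTier.LOW: ["use_case_intake", "documentation"],
    RiskTier.MEDIUM: ["use_case_intake", "risk_assessment", "documentation",
                      "bias_and_robustness_testing"],
    RiskTier.HIGH: ["use_case_intake", "risk_assessment", "documentation",
                    "bias_and_robustness_testing", "human_in_the_loop_review",
                    "post_deployment_monitoring_plan"],
}


def gate(use_case: str, tier: RiskTier, completed: set[str]) -> bool:
    """Return True if the use case may proceed; report missing gates otherwise."""
    missing = [c for c in REQUIRED_CHECKPOINTS[tier] if c not in completed]
    if missing:
        print(f"{use_case}: blocked, missing {missing}")
        return False
    print(f"{use_case}: cleared ({tier.value}-risk lane)")
    return True


# An internal chatbot rides the express lane; a credit-decision agent does not.
gate("internal FAQ chatbot", RiskTier.LOW,
     {"use_case_intake", "documentation"})
gate("credit-decision agent", RiskTier.HIGH,
     {"use_case_intake", "documentation"})
```

The design point is that the express lane is explicit and auditable rather than an informal workaround: even low-risk cases still pass through intake and documentation, so nothing ships ungoverned.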
You have written about how the global south, and especially a country like India, could realise the true value of AI. Kindly elaborate.
What I saw in India is a different orientation toward the purpose of AI deployment. In most global south contexts, AI is being approached not primarily as a profitability instrument but as a tool for actually improving people’s lives. I visited colleagues at SAS who were working with women in rural communities — women who previously had no meaningful participation in the formal economy — who are now being trained to use drones to monitor crop health. They are earning income, building skills, and entering the economy in ways that were structurally unavailable to them before. That is not a story you hear coming out of Silicon Valley.
And that purposive orientation — AI for human benefit rather than AI for shareholder return — may produce better, more trustworthy, more durable AI applications than the profit-first model. The ethical sensibility embedded in that approach is a competitive asset, not a constraint.
You advise governments worldwide, sit on the board of EqualAI, and counsel the White House on AI policy. With all that access and influence — what is the one thing the AI industry is still not willing to hear, that you believe it most urgently needs to?
In history, we tend to look most fondly on, and see greatness in, those who helped people achieve their dreams rather than those who achieved dreams for their own benefit. The prevailing feeling among much of the public is that the people creating the most influential AI systems are trying to achieve their own dreams rather than working on behalf of humanity. That human piece of the AI conversation seems to get lost in the larger ambition of creating the most powerful, omniscient AI. The AI industry needs to be reminded that humans must remain at the centre.