Tags: AI Agents, GenAI, Oracle AI, Enterprise AI, AI Strategy, AI for ERP, Industry - Higher Education

AI + ERP in Higher Education: Why Strategy and Data Governance Matter More Than Tools

A candid conversation on how universities can adopt AI responsibly, without falling into the trap of reactive tech decisions and broken data foundations.
Michael Mathews

Guest Speaker:

Dr. Richard Hartwell

CIO & VP of Institutional Assessment at Neumann University

Host:

Ratnakar Nanavaty

Chief Strategist, Astute Business Solutions

Episode 03


About Guest

Dr. Richard Hartwell is a senior higher-education technology and data leader with extensive experience in analytics, ERP systems, and institutional strategy. He currently serves as Chief Information Officer and Vice President of Institutional Assessment at Neumann University, where he leads IT, data governance, and enterprise analytics initiatives that support both academic and operational decision-making.

In addition, Dr. Hartwell is a faculty lecturer at the University of Pennsylvania, teaching data analytics and machine learning. With a career spanning institutional research, technology leadership, and applied data science, he brings a practical, leadership-focused perspective on how universities can adopt AI thoughtfully, build trust in data, and align technology with mission and values.

AI is quickly becoming a board-level conversation in higher education, and many CIOs are feeling pressure to “do something” fast. But as Richard explains, AI isn’t something universities can implement reactively or treat like another software rollout.

In this episode, we unpack what actually needs to happen before AI can deliver value inside a university: cross-functional coordination, clean and trusted data, strong governance, and leadership-driven strategy. Without these foundations, AI won’t fix reporting mismatches, operational inefficiencies, or siloed decision-making; it will only accelerate them.

Richard also shares a powerful framework from his work at Neumann University: mapping the “ERP ecosystem” and identifying where data breaks down, especially at the boundaries between departments like enrollment, registrar, faculty, and finance. The takeaway is clear: the biggest barriers aren’t technical, they’re cultural.

Top 10 Highlights / Takeaways

  • AI adoption should never be reactive — universities need an enterprise strategy, not a board-driven panic response.
  • AI success depends on trust: trust in data, sources, processes, and outputs.
  • Data governance is not an IT project — it’s a leadership issue.
  • Siloed reporting is the warning sign: when IR and finance reports don’t match, AI won’t solve it.
  • Most data problems happen at “boundaries” — where student information moves between phases (recruiting → registrar → faculty → finance).
  • Root cause analysis still matters — before AI, institutions must fix broken processes producing bad data.
  • The hardest barriers are cultural, not technical — “we’ve always done it this way” blocks progress.
  • Faculty and operations are adopting AI differently — and institutions must merge these contexts into one cohesive strategy.
  • Security and privacy policies are critical — especially around faculty using public AI tools with student data.
  • Mid-tier institutions are most at risk — large universities are already prepared, but smaller ones may not have the resources or readiness.

AI in Higher Education Isn’t a Tool Problem, It’s a Leadership and Data Problem

If you work in higher education, you’ve probably felt it: AI is no longer a “future trend.” It’s a present-tense pressure.

Boards are asking questions. Faculty are experimenting rapidly. Vendors are showing up with shiny demos. And CIOs are stuck in the middle trying to balance urgency, governance, security, and operational reality.

In this episode, Richard offers one of the most grounded perspectives on AI + ERP in higher education, and it starts with a simple truth:

  • AI won’t fix broken data.
    And if universities don’t understand that now, they’ll learn it the hard way.

Why “Board Pressure AI” Is the Wrong Starting Point

Richard’s first point is one every university leader should tattoo onto their strategy documents:

  • You don’t adopt AI because you feel pressured.
  • You adopt AI because it fits your mission and strategy.

Like every major technology shift, AI will influence nearly everything: teaching and learning, enrollment, operations, finance, analytics, student experience, and institutional decision-making.

But the institutions that rush into AI without a cohesive plan will end up with fragmented initiatives: faculty experimenting here, departments buying tools there, and IT scrambling to contain risk.

The result? More silos. More inconsistent reporting. More mistrust.

AI Requires an Enterprise Strategy (Not a Departmental Hobby)

One of the strongest themes in the conversation is that AI has to be treated as an enterprise-level initiative.

That means:

  • Alignment with institutional mission and values
  • Cross-functional leadership involvement
  • Shared definitions of key data
  • Governance and accountability
  • Deliberate vendor selection

If AI isn’t aligned with the institution’s identity and operational priorities, it becomes “ad hoc innovation.” And ad hoc innovation rarely scales.

The Real Foundation: Data Governance + Shared Definitions

Here’s the uncomfortable reality Richard points to:

Many institutions still struggle with a decades-old issue: reports that don’t match.

Leadership receives one report from Institutional Research.

Another report from Finance.

And the numbers don’t align.

That’s not just annoying.

That’s a sign the institution doesn’t have unified data definitions, data quality standards, or governance.

And AI can’t repair that.

In fact, AI will likely make it worse by generating faster insights from inconsistent inputs, creating more confidence in flawed results.

Richard frames this clearly:

AI needs high-quality data, trusted sources, and trusted processes.

Boards can’t “wish” that into existence.
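The reporting mismatch Richard describes can be made concrete. As a hypothetical sketch (the office names, terms, and figures below are illustrative, not from the episode), even a few lines of code can surface where two offices’ enrollment counts diverge:

```python
# Hypothetical sketch: reconcile enrollment counts reported by two offices.
# All names and figures here are illustrative, not from the episode.

ir_report = {"Fall 2024": 4210, "Spring 2025": 3980}       # Institutional Research
finance_report = {"Fall 2024": 4175, "Spring 2025": 3980}  # Finance

def reconcile(a, b):
    """Return the terms where the two reports disagree, with both values."""
    mismatches = {}
    for term in sorted(set(a) | set(b)):
        if a.get(term) != b.get(term):
            mismatches[term] = (a.get(term), b.get(term))
    return mismatches

print(reconcile(ir_report, finance_report))
# → {'Fall 2024': (4210, 4175)}
```

A non-empty result is exactly the governance warning sign in the episode: the two offices are counting from different definitions (for example, census-date headcount versus current registration), and no AI layered on top will make the numbers agree.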

The ERP Ecosystem: Where Data Breaks in Universities

One of the most valuable parts of the conversation is Richard’s practical framework:

Map the ERP ecosystem like a complex system.

Instead of treating enrollment, registrar, faculty, and finance as separate domains, the institution maps how a student (and student data) flows through phases:

  • Recruiting and enrollment
  • Onboarding
  • Course registration
  • Course delivery
  • Grading
  • Billing and finance
  • Reporting and outcomes

Then the institution confronts the truth:

  • Most data issues happen at the boundaries between phases.
  • Enrollment records it one way.
  • The registrar records it another way.
  • Faculty assume it’s happening a third way.

And that’s where errors multiply.

This is where AI becomes dangerous if governance isn’t in place, because AI systems rely heavily on consistent definitions and structured information flow.
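One way to operationalize that ecosystem mapping (a minimal sketch; the phase names and field definitions below are assumptions for illustration, not Neumann University’s actual model) is to record how each phase defines the same student attribute, then flag every boundary where the definition changes:

```python
# Hypothetical sketch: flag boundaries where adjacent phases in the
# student-data lifecycle use different definitions of the same attribute.

PHASES = ["recruiting", "registrar", "faculty", "finance"]

# How each phase (illustratively) defines "enrollment status".
DEFINITIONS = {
    "recruiting": {"enrollment_status": "deposited/committed"},
    "registrar":  {"enrollment_status": "registered for >= 1 credit"},
    "faculty":    {"enrollment_status": "appears on class roster"},
    "finance":    {"enrollment_status": "billed for the term"},
}

def boundary_mismatches(phases, definitions, field):
    """Return each phase boundary where the field's definition changes."""
    issues = []
    for upstream, downstream in zip(phases, phases[1:]):
        a = definitions[upstream][field]
        b = definitions[downstream][field]
        if a != b:
            issues.append((upstream, downstream, a, b))
    return issues

for up, down, a, b in boundary_mismatches(PHASES, DEFINITIONS, "enrollment_status"):
    print(f"{up} -> {down}: '{a}' vs '{b}'")
```

Every line this prints is a boundary where, per Richard’s framework, errors multiply — and where governance work has to happen before AI can be trusted with the data.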

The Hard Part Isn’t Technical, It’s Cultural

Richard makes a point that seasoned higher-ed leaders will recognize immediately:

The technical barriers are solvable.

The cultural barriers are brutal.

Every institution has some version of:

  • “We’ve always done it this way.”
  • “That’s not how our department works.”
  • “Why do we need to standardize?”
  • “This system is fine.”

AI doesn’t remove those barriers; it exposes them.

And to overcome them, institutions need leadership-driven change management, not just IT upgrades.

Faculty vs Administration: Two Different AI Realities

The conversation also touches a fascinating tension:

Faculty are often early adopters, using AI to build lesson plans, improve learning materials, and even help students code.

Richard even shares his own experience teaching data science and using tools like ChatGPT and Google Colab to enhance coding.

But faculty AI use creates new questions:

  • How do we detect cheating?
  • How do we ensure academic integrity?
  • What’s acceptable vs unacceptable use?

Meanwhile, IT and operations have been using AI for years, especially in cybersecurity, log management, and monitoring systems.

So the real challenge isn’t “whether AI is coming.” It’s already here.

The challenge is blending these two contexts into one institutional strategy, without exposing the university to privacy risk or operational chaos.

Early Adoption: Analytics and Administrative Disruption

When asked where AI will show up first on the administrative side, Richard highlights a key area:

Analysis.

The ability to feed in spreadsheets and generate:

  • Predictive modeling
  • Financial summaries
  • Sentiment analysis
  • Operational insights

…is already disrupting workflows.

But even when AI produces accurate results, Richard shares a surprising truth:

People still don’t believe it.

He gives a great example: a student survey sentiment analysis produced a clear and accurate summary, but leadership distrusted it because only 10% of students responded.

This is a perfect snapshot of the real AI challenge:

AI adoption isn’t about output quality alone.

It’s about trust in data, trust in process, and trust in interpretation.
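The “feed in a spreadsheet” analysis above can be made concrete with a deliberately naive sketch (the word lists and comments below are illustrative assumptions; real tools use far more capable models) of lexicon-based sentiment scoring over survey responses:

```python
# Hypothetical sketch: naive lexicon-based sentiment over survey comments.
# Word lists and comments are illustrative, not from any real survey.

POSITIVE = {"helpful", "great", "clear", "supportive"}
NEGATIVE = {"confusing", "slow", "frustrating", "unclear"}

def sentiment(comment):
    """Score a comment: +1 per positive word, -1 per negative word."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "Advising was helpful and clear",
    "Registration felt slow and confusing",
    "Faculty were supportive",
]
scores = [sentiment(c) for c in comments]
print(scores, round(sum(scores) / len(scores), 2))
# → [2, -2, 1] 0.33
```

Even a perfectly correct summary like this inherits the trust problem from the episode: if only 10% of students responded, leadership may reasonably discount the result no matter how accurate the analysis is.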

One Year From Now: The Big Question Universities Aren’t Ready For

Richard closes with a bigger concern, one that feels increasingly relevant:

  • AI isn’t just a technology issue.
  • It’s also a sustainability, ethics, and resource issue.
  • AI data centers consume massive amounts of energy.
  • AI infrastructure is expanding into economic zones.
  • AI adoption requires money, human capital, training, and governance.

And Richard’s biggest worry is this:

  • Mid-tier institutions may not be ready for the investment and transition.
  • Large universities already have frameworks, platforms, and teams.

But smaller and mid-sized institutions may struggle, not because they don’t want AI, but because they don’t have the resources and readiness.

Final Thought

If there’s one message this episode delivers clearly, it’s this:

AI in higher education isn’t a software rollout. It’s an institutional transformation.

And transformation starts with:

  • Leadership
  • Strategy
  • Governance
  • Trust
  • Cultural alignment

AI won’t fix broken data.

But it will force institutions to finally confront it.

Episode 03 · 14:54
