My Reflections on AI, Bias & Digital Strategy at The Royal Marsden

Introduction

Back in 2022, I was interviewed by HTN about the opportunities and challenges of applying artificial intelligence (AI) in healthcare. In that conversation, I shared my thoughts on algorithmic bias, the importance of governance, how we’re thinking about patient portals and digital infrastructure at The Royal Marsden, and where I see NHS technology heading.

Here, I’d like to revisit those reflections and highlight the key themes that still feel very relevant today.


Key Themes & Insights

1. Why Bias Matters in AI

When I think about AI in healthcare, bias is one of the first issues that comes to mind. I often describe five types we need to be aware of:

  • Implicit bias – the unconscious assumptions that developers and clinicians bring with them.
  • Sampling bias – when training data doesn’t represent the full diversity of our patient population.
  • Temporal bias – algorithms trained on yesterday’s data may not hold up in tomorrow’s world.
  • Over-fitting – models tuned so tightly to their training data that they perform brilliantly in the lab but poorly in practice.
  • Edge cases and outliers – the rare but important scenarios that get missed.

Bias can creep in at the design stage, but it can also emerge over time as populations and data evolve.
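To make this a little less abstract, here is a minimal sketch of how sampling bias can be surfaced in practice: train a model, then report its performance separately for each patient subgroup rather than as a single headline figure. Everything here is illustrative, the synthetic data, the column names, and the logistic regression stand in for whatever model and cohort you are actually working with; nothing below is a Royal Marsden system or dataset.

```python
# Illustrative sketch: surfacing sampling bias by checking model
# performance per patient subgroup. Synthetic data and a toy model only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic cohort: group "B" is deliberately under-sampled to mimic
# training data that does not reflect the full patient population.
n = 2000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "biomarker": rng.normal(1.0, 0.3, n),
    "group": rng.choice(["A", "B"], size=n, p=[0.9, 0.1]),
})
# The outcome depends on the biomarker differently in each group.
logit = 0.05 * (df["age"] - 65) + np.where(df["group"] == "A", 2.0, -1.0) * (df["biomarker"] - 1.0)
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

train, test = train_test_split(df, test_size=0.3, random_state=0)
model = LogisticRegression().fit(train[["age", "biomarker"]], train["outcome"])

# Report discrimination (AUC) per subgroup: a large gap is a prompt to
# investigate representativeness, not proof of harm on its own.
for group, subset in test.groupby("group"):
    if subset["outcome"].nunique() < 2:
        continue  # AUC is undefined if only one class is present
    probs = model.predict_proba(subset[["age", "biomarker"]])[:, 1]
    print(f"Group {group}: n={len(subset)}, AUC={roc_auc_score(subset['outcome'], probs):.2f}")
```

The point of the sketch is the reporting habit, not the model: a single overall metric can look perfectly respectable while quietly hiding a subgroup the algorithm serves badly.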

2. Where We’re Exploring AI at The Royal Marsden

At The Royal Marsden, we’ve been exploring several areas where AI can make a difference:

  • Radiology: supporting abnormality detection and triage.
  • Natural language processing (NLP): turning unstructured clinical notes into coded, usable data.
  • Conversational agents: helping clinicians in real time with decision support.
  • Wearables and sensors: analysing streams of data from patients at home.
  • Population health: using analytics to allocate resources and plan services more effectively.

One concrete example I’ve spoken about is using NLP to support the stratification of lung cancer nodules.
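To give a flavour of what "turning unstructured notes into coded, usable data" means, here is a deliberately simple sketch: pulling a nodule size out of a free-text radiology report and mapping it onto a coarse band. The regex, the example sentence, and the size thresholds are placeholders for illustration only; they are not the pipeline, vocabulary, or clinical criteria used in practice.

```python
# Illustrative sketch: extracting a coded nodule size from free text.
# The pattern, example report, and size bands are placeholders only.
import re
from typing import Optional

SIZE_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*mm\s+(?:pulmonary\s+)?nodule", re.IGNORECASE)

def extract_nodule_size_mm(report_text: str) -> Optional[float]:
    """Return the first nodule size mentioned in the report, in millimetres."""
    match = SIZE_PATTERN.search(report_text)
    return float(match.group(1)) if match else None

def size_band(size_mm: float) -> str:
    """Map a size onto a coarse, purely illustrative band."""
    if size_mm < 5:
        return "small"
    if size_mm < 8:
        return "intermediate"
    return "large"

report = "CT chest: a 6 mm pulmonary nodule in the right upper lobe, unchanged."
size = extract_nodule_size_mm(report)
if size is not None:
    print(f"Nodule size: {size} mm -> band: {size_band(size)}")
```

A production pipeline would of course use proper clinical NLP, negation handling, and validated criteria rather than a single regex, but the shape of the task is the same: free text in, coded fields out.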

3. Governance, Shelf Life & Licensing of Algorithms

I firmly believe that AI algorithms should be treated like licensed products. Each one should have a documented shelf life, and we need formal governance processes to review, revalidate, and—when necessary—retire them. Without that, temporal bias can easily undermine their reliability.
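One way to picture "algorithms as licensed products" is a simple register in which every deployed model carries an approval date, an explicit shelf life, and a revalidation record, and anything past its review date gets flagged to the governance group. The sketch below is an assumption of how such a register might look, with made-up names and dates; it is not an actual Trust system.

```python
# Illustrative sketch of an algorithm register with shelf lives and
# revalidation dates. Names, dates, and fields are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AlgorithmLicence:
    name: str
    approved_on: date
    shelf_life_months: int      # how long a validation is trusted for
    last_revalidated: date

    @property
    def review_due(self) -> date:
        # Rough month-to-day conversion is fine for the sketch.
        return self.last_revalidated + timedelta(days=self.shelf_life_months * 30)

    def needs_review(self, today: date) -> bool:
        return today >= self.review_due

registry = [
    AlgorithmLicence("nodule-triage", date(2021, 3, 1), 12, date(2022, 2, 1)),
    AlgorithmLicence("ed-demand-forecast", date(2020, 6, 1), 6, date(2020, 6, 1)),
]

today = date(2022, 9, 1)
for alg in registry:
    status = "REVIEW OVERDUE" if alg.needs_review(today) else "in date"
    print(f"{alg.name}: review due {alg.review_due} ({status})")
```

The mechanics are trivial; the discipline is the point. If nobody owns the review date, temporal bias wins by default.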

4. Patient Portals & Digital Infrastructure

AI doesn’t exist in a vacuum. Its success depends on the maturity of the underlying digital infrastructure: electronic patient records, portals, and interoperable systems. These foundations are what make data available and usable, and they’re just as important as the AI tools themselves.
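What "interoperable" looks like in practice is that different systems can read the same record in the same way. As a small sketch, assuming an electronic patient record that exposes a standard HL7 FHIR REST interface, a patient resource can be read like this; the base URL and patient ID are hypothetical, and any real call would also need authentication and information governance sign-off.

```python
# Illustrative sketch: reading a Patient resource over a standard FHIR
# REST interface. The endpoint and ID are hypothetical placeholders.
import requests

FHIR_BASE = "https://example-epr.nhs.uk/fhir"   # hypothetical endpoint
patient_id = "12345"

response = requests.get(
    f"{FHIR_BASE}/Patient/{patient_id}",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
response.raise_for_status()
patient = response.json()

# Because the payload follows the FHIR Patient resource, portals,
# analytics, and AI pipelines can all consume the same fields.
print(patient.get("resourceType"), patient.get("id"))
```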

5. The NHS Context

I’ve always tried to be honest about the constraints we face in the NHS: legacy systems, siloed data, resource pressures, and the temptation to rush into deploying shiny new technologies. I believe we need to move incrementally, test rigorously, and govern carefully.


My Reflections for Health Tech Leaders

Looking back, I’d emphasise five points for anyone leading digital health initiatives today:

  1. Bias isn’t abstract — it’s real and multi-layered.
  2. Governance should be built in from day one.
  3. Strong data and infrastructure matter just as much as the AI itself.
  4. Start small, scale fast when you find what works.
  5. Collaboration is key — clinicians, data scientists, and ethicists all need a seat at the table.

Closing Thoughts

As CIO at The Royal Marsden, my focus has always been on how digital can support better care for patients. AI has huge potential, but only if we approach it responsibly. That means acknowledging bias, putting governance front and centre, and never forgetting that technology must serve both clinicians and patients.

I continue to believe that the NHS can harness AI powerfully — but we must do so in a way that is sustainable, ethical, and grounded in the real needs of those we serve.