Methven Forbes, CEO of Fuller and Forbes Healthcare Group, discusses improving productivity and streamlining care without sacrificing clinical quality
For decades, technology has promised to revolutionise healthcare—streamlining access, reducing inefficiencies, and delivering better outcomes for patients. In recent years, artificial intelligence (AI) and digital health tools have accelerated that promise, reshaping everything from workforce planning to predictive analytics. Yet the reality is more complex. Innovation moves faster than regulation, faster than workforce adaptation, and—most dangerously—faster than our ability to safeguard clinical standards.
If we treat AI as a silver bullet, we risk creating brittle systems: efficient on the surface, but fragile underneath. Instead, the opportunity is to embed AI and digital health in a framework that values quality over raw efficiency. Just as Quality-Based Healthcare (QBHC) offers an alternative to narrow outcome-driven models, we now need a Quality-Based Digital Care approach—one that leverages innovation while protecting professional accountability and patient trust.
Health systems transformation
The most immediate impact of digital health is in system design. Nowhere is this clearer than in primary care, where rising demand, staff shortages, and administrative burden create relentless pressure.
Here, e-consultation systems, such as our own SMARTconsult and SMARTnavigation, provide a case study in transformation. Since its launch in 2018, SMARTconsult has managed more than 500,000 structured e-consultations across our UK primary care medical centres. Unlike generic online forms, SMARTconsult is built around condition-specific templates—more than 70 in total—each designed by GPs to capture structured histories, embed red-flag logic, and generate concise clinical summaries.
Our SMARTnavigation system, launched in 2024, has supported receptionists in triaging over 200,000 patient contacts. Receptionists have long borne the brunt of patient interaction, while being asked to ensure patients are seen at the right time, with the right person, first time. Yet these non-clinical staff must respond to an extraordinary range of clinical and administrative inquiries—set against the backdrop of more than 85,000 recognised conditions in the International Classification of Diseases.
By embedding structured triage pathways, SMARTnavigation reduces the cognitive load on reception teams, ensuring consistency while still allowing for human judgement.
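To make the idea of a structured triage pathway concrete, the sketch below shows how condition-specific red-flag logic might route a patient contact. This is a minimal illustration only: the condition, the questions, and the routing rules are hypothetical examples, not the actual SMARTconsult or SMARTnavigation logic.

```python
# Illustrative sketch of structured triage with embedded red-flag logic.
# The template, questions, and routes are hypothetical, not the real
# SMARTconsult/SMARTnavigation rules.

from dataclasses import dataclass, field

@dataclass
class TriageOutcome:
    urgency: str                      # "emergency", "urgent", or "routine"
    route: str                        # where the patient is directed
    red_flags: list[str] = field(default_factory=list)

# Hypothetical red-flag rules for a sore-throat template.
RED_FLAGS = {
    "difficulty breathing": "emergency",
    "unable to swallow saliva": "emergency",
    "symptoms over 3 weeks": "urgent",
}

def triage_sore_throat(answers: dict[str, bool]) -> TriageOutcome:
    """Apply red-flag rules first; route the remainder to routine care."""
    flags = [q for q, hit in answers.items() if hit and q in RED_FLAGS]
    if any(RED_FLAGS[f] == "emergency" for f in flags):
        return TriageOutcome("emergency", "999/A&E", flags)
    if flags:
        return TriageOutcome("urgent", "GP same-day", flags)
    # No red flags: suitable for pharmacy-led or self-care advice.
    return TriageOutcome("routine", "pharmacy", flags)
```

Because the rules are explicit and checked in a fixed order, every receptionist applies the same safety net, while the final routing decision can still be overridden by human judgement.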
Yet technology alone is not enough. Transformation only succeeds if health providers remain the data controllers, with digital tools serving their workflows rather than dictating them. Clinical guidance and evidence-based practice may rightly be set at regional or national levels, but their application is always nuanced locally. Every provider serves a unique community, with different funding mechanisms, demographics, levels of deprivation, and patterns of demand. What works in one setting may require adaptation in another.
This is where professional judgement matters. Technology must therefore act as an enabler—supporting clinicians and teams to apply national standards in ways that make sense locally—rather than a straitjacket that forces uniformity. If systems are too rigid, “efficiency” risks becoming another word for disempowerment. If designed well, however, digital tools can reinforce clinical autonomy while ensuring safe, consistent, and equitable care across diverse populations.
Predictive healthcare
AI’s greatest promise lies in its ability to aggregate, synthesise, and predict. By analysing and categorising vast datasets, machine learning can recognise patterns that, for example, enable the flagging of patients at risk of deterioration before crisis strikes. When combined with structured inputs from e-consultation platforms, the potential for proactive outreach is clear.
Imagine a system where subtle patterns in routine e-consult submissions and patient records trigger timely interventions. An AI model identifies that a 58-year-old patient with indigestion, mild anaemia, and recent weight loss may be at increased risk of upper gastrointestinal cancer. While any experienced GP could recognise these red flags, the model ensures that such patterns are not overlooked or delayed, prompting earlier investigation. At a population level, similar technology could detect clusters of paediatric cases with sore throats and fevers in a local area, enabling rapid public health alerts and preventative action before an outbreak takes hold.
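The kind of pattern described above can be caricatured as a simple rule. A real predictive model would be trained on coded patient records rather than hand-written rules; the feature names, age cut-off, and threshold below are hypothetical, chosen only to mirror the 58-year-old example.

```python
# Illustrative only: a simplified, rule-based stand-in for the kind of
# pattern a trained model would learn. Feature names, the age cut-off,
# and the co-occurrence threshold are hypothetical.

def flag_upper_gi_risk(age: int, features: set[str]) -> bool:
    """Flag a record for earlier investigation when concerning
    features co-occur in an older patient."""
    concerning = {"dyspepsia", "anaemia", "weight_loss"}
    hits = features & concerning
    # Two or more co-occurring features in a patient aged 55+
    # prompts review, mirroring the example in the text.
    return age >= 55 and len(hits) >= 2
```

The value of even this toy version is consistency: the check runs on every record, every time, so the pattern cannot be missed in a busy clinic, while the decision to investigate remains with the clinician.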
In theory, predictive analytics could shift the balance from reactive care to prevention. Of course, clinicians exercise pattern recognition and clinical judgement every day—particularly with patients they are most familiar with. But whilst this intuitive understanding is invaluable, it is not possible for a clinician to do this consistently across the thousands of unique patients they see each year. With the proliferation of structured and unstructured health data in primary care, it is increasingly unrealistic to expect clinicians to read, synthesise, and interpret every relevant detail within the confines of a 15-minute consultation.
Yet predictive healthcare carries risks. Algorithms trained on historic data might reproduce existing inequalities. If not carefully designed, predictive models may flag vulnerable populations as “high risk” without the wider healthcare system allocating the resources to support them. The temptation, particularly in performance-driven systems, is to avoid complex patients rather than engage them.
That is why prediction must be embedded in a quality framework. AI should guide clinical attention, not ration compassion. Early-warning flags must be linked to proactive pathways—additional clinic appointments, targeted outreach, or enhanced pharmacy support—so that risk becomes an opportunity for care, not an excuse for exclusion.
Workforce and clinical practice
If prediction highlights future demand, then workforce transformation determines whether health systems can respond. Much of the debate around AI frames technology as a replacement for staff shortages. In truth, the goal should be partnership.
Digital consultation platforms already show how structured templates can redistribute workload: pharmacists managing “Pharmacy First” cases, nurse practitioners handling protocol-driven templates, GPs focusing on complex or diagnostic challenges.
AI can extend this redistribution. Automated summarisation of patient records, natural language processing to code consultations, and decision-support tools that suggest guideline-based interventions all free up clinicians to spend time where their judgement matters most.
From our experience, the lesson from frontline practice is clear: digital health must reduce cognitive load, not increase it. Poorly designed platforms that generate more clicks or duplicate data entry drive burnout rather than resilience. Conversely, systems like SMARTconsult and SMARTnavigation that produce concise, structured outputs show how digital tools can enhance rather than obstruct clinical decision-making.
Resilience also requires continuous investment in people. AI must not be a shortcut to de-professionalisation. Instead, technology should be coupled with mentoring, peer review, and ongoing professional development. If clinicians feel reduced to data processors, the workforce will fracture. But if AI is framed as an assistant to judgement—helping clinicians apply evidence consistently while retaining accountability—digital health becomes a force for professional empowerment.
Pandemic preparedness and crisis response
The COVID-19 pandemic was the ultimate stress test for digital health. Almost overnight, telephone triage, video consultations, and online platforms became the default mode of access. Some GP-led apps were even suspended for referencing the virus before regulatory frameworks caught up, highlighting the fragility of digital adoption under crisis conditions.
Looking ahead, AI-enabled e-consultation systems could form part of a pandemic preparedness toolkit. Structured digital triage allows rapid scaling without overwhelming phone lines. Automated red-flag logic ensures safety while filtering non-urgent cases to pharmacies or remote advice. Integration with secondary care and public health databases could provide real-time epidemiological insights.
But preparedness is not just about speed—it is about trust and governance. Patients must believe that digital pathways protect their safety, their privacy, and their ability to see a clinician when necessary. Providers must know they remain in control of data and workflow. Otherwise, in the next crisis, adoption will falter at precisely the moment it is needed most.
Towards a quality-based digital future
The trajectory is inevitable: AI and digital health will shape the future of healthcare delivery. The question is not whether to adopt, but how.
If adoption is driven purely by efficiency targets, the result will be brittle systems that entrench inequalities and reduce clinicians to checkbox operators. If, instead, adoption is guided by the principles of Quality-Based Digital Care, technology can reinforce resilience rather than undermine it.
Conclusion
AI and digital health have the potential to streamline healthcare provision across health system transformation, predictive care, workforce redesign, and pandemic preparedness. But the danger lies in mistaking speed for resilience. Technology amplifies whatever framework it sits within—if that framework prioritises numbers over quality, inequities will grow. If it prioritises quality, accountability, and long-term investment in patients, technology becomes a catalyst for a more resilient system.
As we move towards national requirements for digital access, systems like SMARTconsult and SMARTnavigation demonstrate what is possible when digital tools are designed by clinicians, for clinicians. They show that efficiency and safety are not opposites, and that structured, AI-enabled care can enhance both resilience and trust.
The question is not whether AI will change healthcare—it already has. The real challenge is ensuring technology supports quality rather than dictating care, so that AI protects rather than replaces the human heart of healthcare.