BRG’s 2024 Global AI Regulation Report reveals lack of confidence in compliance ability among organisations
More accurate diagnoses. Faster clinical trials for life-saving drugs. Personalised patient communications and treatment plans. Streamlined business functions. These and myriad other applications of artificial intelligence (AI) have dominated attention and headlines in healthcare.
BRG’s 2024 Global AI Regulation Report found that only four in ten surveyed respondents are highly confident in their organisation’s ability to comply with current regulations and guidance. This finding supports similar sentiments shared in BRG’s AI and the Future of Healthcare report: in a survey of US healthcare professionals conducted in the fall of 2023, only four in ten indicated that their organisations are reviewing or plan to review AI guidance. When it comes to regulatory compliance, the areas of greatest concern were the patchwork of regulations emerging in the US and cybersecurity/data management.
BRG is a global consulting firm that helps leading organisations advance in three key areas: economics, disputes, and investigations; corporate finance; and performance improvement and advisory. Headquartered in California with offices around the world, it comprises an integrated group of experts, industry leaders, academics, data scientists, and professionals working across borders and disciplines.
An emerging global regulatory landscape
Current policy is very much in its early stages, with different jurisdictions’ frameworks, guidelines, and requirements in varying stages of maturity—from the European Union’s risk-based AI Act and the Association of Southeast Asian Nations’ (ASEAN) business-friendly Guide on AI Governance and Ethics to President Biden’s executive order and state- and country-specific laws now taking shape.
Given this patchwork, business leaders view the effectiveness of current AI policy in different ways. Lawyers are far less confident in it than executives, as are North American respondents compared to those in Europe, the Middle East, and Africa (EMEA) and Asia-Pacific (APAC). Overall, only about one-third of the more than 200 respondents to the Global AI Regulation Report view today’s policies as ‘very effective’.
A key concern raised in BRG’s AI and the Future of Healthcare report, echoed by industry experts who were interviewed, is that regulators need to balance safety and efficacy with fostering innovation.
“AI’s potential in healthcare is just beginning to unfold, offering automation, improved patient experiences, and innovation for health systems of all sizes,” says Julie Coope, Associate Director, BRG, London. “However, these advantages come with cybersecurity risks and potential for compromised patient data. These are universal concerns and demand careful consideration. These challenges are further compounded by the global patchwork of evolving regulations that have failed to keep pace with technology adoption.”
Recommended next steps
• Engage with regulators: As the regulatory environment takes shape, healthcare executives have the responsibility and opportunity to work with lawmakers in developing a regulatory framework that balances innovation, efficacy, and safety.
• Develop a robust governance model: Many IT and security governance models are inadequate to manage AI development and deployment. A strong interdisciplinary governance model should be established to manage AI, innovation, and automation initiatives throughout the organisation.
• Advance thoughtfully: AI can bring immense value, but tread carefully when it comes to dedicating significant resources in today’s uncertain regulatory environment. Build close relationships with technical leaders, internal AI experts, and vendors.
Key Findings
AI regulation is still emerging, and perceptions of its present effectiveness are mixed. About one-third of respondents believe current policy is ‘very effective’, while roughly equal proportions rate it ‘moderately effective’ or ‘slightly effective’/‘not effective’.
• Only four in ten are highly confident in their ability to comply with current regulation and guidance. Respondents cite lack of internal training and inadequate data management/security protocols as primary reasons.
• Fewer than half of all organisations have implemented internal safeguards to promote responsible and effective AI development and use. The highest proportion of organisations (45 per cent) have implemented data quality, collection, and storage reviews—as well as data protection, privacy, and security risk reviews. Fewer than one-third have implemented cross-functional teams to manage AI (31 per cent) or processes to mitigate biases and ensure ethical use (29 per cent).
• Data integrity, security, and accuracy/reliability are the three main focus areas for regulators and businesses. AI is only as good (or bad) as the underlying data. These were cited as main areas of compliance focus for organisations, as well as the most important for policymakers to address.
• Only 36 per cent of respondents feel strongly that future AI regulation will provide necessary guardrails. At the same time, more than half (57 per cent) expect “effective” AI policy within three years.
BRG’s inaugural global report analyses sentiment from top business leaders and policy analysts on the effectiveness of current AI policies and their confidence in complying with them; predictions for the future of AI policy; and the guardrails most necessary to balance innovation and security.
You can download and read BRG’s full report online at www.thinkbrg.com/airegulation