The innovations described here promise to transform global health and healthcare, but only if their value can be realized. Public- and private-sector leaders identified four broad barriers to the full-scale transformation of healthcare through AI: holes in the data foundation, lack of scalability and cooperation, inadequate technological infrastructure, and limited trust and adoption.

Holes in the data foundation

Of the experts interviewed, 75% cited a poor data foundation as the primary barrier to realizing AI's value in healthcare. Algorithms must be trained on data, yet even in high-income countries, inconsistent data collection and a lack of interoperability impede the scaling of AI models across organizations and borders. Experts also cited AI's vulnerability to misuse and abuse through privacy invasion, data theft and sub-par algorithms that perpetuate bias. Innovative data-sharing models have shown promise, including federated learning architectures such as that of India's National Digital Health Mission (NDHM), as well as proprietary alternatives from private-sector players such as TripleBlind.
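The report names federated learning only in passing; as a minimal sketch of the underlying idea, the example below shows federated averaging (FedAvg), in which each participating institution trains on its own records and shares only model parameters with a coordinator that averages them. The hospitals, data and model here are hypothetical placeholders, not a description of the NDHM or TripleBlind systems.

```python
# Minimal federated averaging (FedAvg) sketch: hospitals train locally on
# their own records and share only model weights; raw patient data never
# leaves the institution. All names and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Run a few epochs of logistic-regression gradient descent on one site's data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted risk
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

# Three hypothetical hospitals with differently sized local datasets.
sites = []
for n in (200, 500, 120):
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)
    sites.append((X, y))

global_w = np.zeros(5)
for _ in range(10):
    # Each site trains on its own data and returns only updated weights.
    updates = [local_update(global_w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    # Coordinator aggregates: size-weighted average of the site models (FedAvg).
    global_w = np.average(updates, axis=0, weights=sizes)

print("Aggregated model weights after 10 rounds:", np.round(global_w, 3))
```

In practice the aggregation step would sit behind secure-aggregation or similar privacy-preserving protocols; the sketch mainly illustrates why consistently collected, interoperable local data remains a prerequisite even when records never leave the institution.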
Lack of scalability and cooperation

Almost half of the experts interviewed attributed AI's slow value realization to the private sector's focus on experimentation at the expense of scalability, and to a lack of impetus to move innovations across borders. While many leaders lauded pilots launched by their organizations' innovation labs, these same leaders struggled to point to clear examples of scaled AI applications emerging from those initiatives. Reasons for this disconnect include a lack of incentives to scale solutions beyond high-income countries, where public and private funding are more plentiful, and policy differences that further discourage cross-border scaling. These advances will not benefit the tens of millions of people who need them unless tools are designed with cross-border transferability as a primary goal.

Inadequate technological infrastructure

A handful of those interviewed cited technological infrastructure shortcomings as a reason for the slow pace of value realization. While mobile phones are widespread in low- and middle-income countries, just 50% of these countries' populations have access to mobile internet, a significant impediment to the data exchange required for widespread AI adoption.28 Beyond consumer broadband and mobile access, leaders agreed that government IT and data infrastructure are badly in need of upgrades. Even the IT infrastructure of private organizations is not ready for the pace of AI's advances: health records are still primarily paper-based in many countries, and stronger cloud infrastructure is needed. However, most agreed that upstream issues, such as low trust and an inadequate data foundation, represent the largest barriers to realizing value.

Limited trust and adoption

"Will AI replace humans or act as a partner? Although we may eventually trust what AI tells us, we are not there yet. And in healthcare, decision-making is too complex, and the stakes are too high for us to get there anytime soon. AI will be a partner for the foreseeable future."
Nassar Nizami, Executive Vice-President, Chief Information and Digital Officer, Thomas Jefferson University and Jefferson Health

Of the experts interviewed, around 80% cited low trust as a significant barrier to realizing the value of AI in healthcare. Drivers of low trust include algorithms that do not integrate with existing workflows, unease with AI replacing clinicians, potential bias in the data sets used to train algorithms, and a lack of transparency into how algorithms perform across diverse populations. Key to building trust, and thus adoption, is empowering a central body or bodies to certify that algorithms have been developed transparently, are free from bias and perform as advertised. The US Food and Drug Administration's (FDA) process for reviewing AI-enabled medical devices has evolved, but experts agreed that stronger guardrails are needed to increase transparency once algorithms are approved.
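The report does not specify what such certification would examine; one hedged illustration is a per-subgroup performance audit, in which sensitivity and specificity are reported for each population group rather than only in aggregate. The groups, scores and threshold below are entirely made up for the example.

```python
# Illustrative subgroup performance audit: compare a model's sensitivity and
# specificity across population groups. Data and group labels are made up.
import numpy as np

rng = np.random.default_rng(1)

groups = np.array(["A"] * 400 + ["B"] * 100)          # imbalanced cohorts
y_true = rng.binomial(1, 0.3, size=len(groups))       # actual condition
# Hypothetical model scores that happen to be noisier for group B.
noise = np.where(groups == "B", 0.45, 0.2)
y_score = np.clip(y_true * 0.7 + rng.normal(0, noise), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

print(f"{'group':<6}{'n':>6}{'sensitivity':>14}{'specificity':>14}")
for g in np.unique(groups):
    m = groups == g
    tp = np.sum((y_pred[m] == 1) & (y_true[m] == 1))
    fn = np.sum((y_pred[m] == 0) & (y_true[m] == 1))
    tn = np.sum((y_pred[m] == 0) & (y_true[m] == 0))
    fp = np.sum((y_pred[m] == 1) & (y_true[m] == 0))
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"{g:<6}{m.sum():>6}{sens:>14.2f}{spec:>14.2f}")
```

Large gaps between groups in a table like this are exactly the kind of signal a certifying body could require developers to disclose before and after approval.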