Globally, the use of AI in health has shown some promise in particular domains such as image recognition in radiology, analytics to support diagnosis in controlled environments, and workflow support. However, systematic reviews repeatedly show that tools which apparently perform well in pilot settings flounder in real-world contexts. AI is good at recognising and matching patterns, but healthcare is much more than pattern recognition, since it involves complex clinical and ethical judgements, social contextualisation of patients including explanation and reassurance, and direct physical caring, all of which involve human relationships, not just algorithms.
Crucial to protect rights
The Delhi session raised sharp concerns about digital extractivism: who owns health data, who benefits from derived intelligence, and who bears the risks? Patients need understandable narratives and empowerment, not just to be treated as sources of data. And if AI tools are trained primarily on urban, digitised populations, they may entrench caste, gender, regional, and socio-economic bias. Hence any use of AI in health must be anchored in a strongly rights-based framework.

This includes the right to understand, since patients and people should not merely access their health data; they must be able to comprehend it. AI systems should translate complex medical information into clear, relevant explanations which support informed decisions. The right to local processing means that sensitive health data should by default be processed locally wherever possible, rather than being centralised in corporate or state-controlled servers; cloud sharing must be explicit and revocable. The right to ongoing control implies that consent cannot be a one-time formality; individuals must be able to withdraw access to their data, and should control not just their data but also the insights generated from it. The right to equity and access means that AI systems must be audited for bias, made accessible across regions and languages, and governed transparently to ensure that they reduce rather than deepen health inequalities. AI-supported services developed with public resources should be available free at the point of use within public health systems. Non-exclusion must be guaranteed: no one should be denied care because they do not engage with AI systems; non-AI pathways in healthcare must always remain available and viable.
Supplementing human care
A core principle is that AI must supplement, not substitute, human care. AI might support documentation and data interpretation, but decisions in healthcare must remain with accountable human providers. Humans must always be in the loop for all AI-assisted functions, keeping in view that health workers and professionals are the backbone of care. In health systems already marked by precarious labour conditions, there is a real risk that AI will become a justification for staff reductions, casualisation, increased workloads, or algorithmic surveillance of ASHAs and other frontline workers. Hence approval of AI tools should require labour impact assessments, ensure explainability for frontline workers, and carry explicit guarantees against workforce reduction. Any technological gains must enhance the capacity and dignity of health workers, not displace them.
Political economy of AI
The crucial question is not whether AI can help, but whom AI will serve. The current use of AI is not neutral; it is largely embedded in monopolistic, profit-driven models. If deployed through commercial platforms which centralise patient data, AI risks deepening corporatisation, creating an elite layer of care, and being used to drive high-cost market expansion rather than rational access. If public data and public funds build AI systems, their primary obligation must be to strengthen public provisioning, not to subsidise corporate profits.
Any use of AI in India must be grounded in a health systems approach. AI can be judiciously deployed to strengthen primary and preventive care, and to empower patients, including support for rational drug use, improving referral systems, demystifying hospital billing, or simplifying medical information for users. But we must remember that India's health system challenges are not primarily technical; they are political, economic and structural, including persistent underinvestment in public health, shortages of trained personnel, inadequate regulation of commercial healthcare, and high out-of-pocket expenditure. These are institutional failures which algorithms will not fix.
To conclude, we should always not count on know-how to offer options to what are principally coverage and systemic issues (often known as ‘techno-solutionism’). Like all know-how, AI should serve sufferers’ rights, well being fairness and public goal, whereas well being staff and professionals stay the spine of care. Well being information should above all belong to sufferers and folks, and any derived intelligence should be accountable to them. Whereas shaping the way forward for Indian healthcare, AI and varied applied sciences can present help — however individuals, well being staff and public well being should stay firmly on the centre.
Dr. Abhay Shukla is a public health physician and national co-convenor of Jan Swasthya Abhiyan. Views expressed are personal. He acknowledges Surajit Nundy, the RAXA team and participants of the People-Led AI in Health session for their valuable ideas which informed this article.
Published – February 20, 2026 12:34 am IST