VatsalPatel18:
Can you create a webpage infographic for this research work? You need to display the quantitative information in a perceptible way, just as an audience sees a slide deck.

"Global Regulations for AI-Enabled Medical Devices (July 2025)

Introduction

Artificial-intelligence (AI) techniques underpin a new generation of medical technologies. Systems that interpret medical images, predict disease progression or provide clinical recommendations are increasingly deployed in hospitals and digital health platforms. Regulators must ensure such tools are safe and effective while fostering innovation. This report reviews the state of regulation for AI-enabled medical devices and Software as a Medical Device (SaMD) as of July 2025 in major jurisdictions around the world. It draws on government guidance, statutory instruments and recent policy developments, and highlights quantitative information where available.

Global Themes

Risk-based regulation and classification. Most jurisdictions apply a multi-class framework based on the device's intended use and patient risk. Low-risk AI software may be exempt from local clinical trials or qualify for simplified registration, while high-risk tools (e.g., diagnostic or therapeutic systems) undergo rigorous pre-market review, post-market surveillance and human-oversight requirements.

Life-cycle and change management. Regulators increasingly recognise that AI models may change over time. The U.S. FDA's Predetermined Change Control Plan and Japan's post-approval change protocol allow manufacturers to pre-specify algorithm updates. Singapore's Health Sciences Authority (HSA) requires versioning and traceability data and distinguishes between significant changes (major algorithm modifications, introduction of AI features) and non-significant changes[1]. Saudi Arabia's SFDA mandates notifying the authority within 10 days of a significant change and within 30 days of a non-significant change[2]; these windows are illustrated in the sketch after this section.

Transparency, quality management and post-market surveillance (PMS). There is convergence on the need to document training datasets, describe AI logic in clear language, conduct analytical and clinical validation, monitor performance, and manage cybersecurity. PMS obligations, such as timely reporting of adverse events, submission of Field Safety Corrective Action (FSCA) notifications and performance-deviation monitoring, are explicitly required in Singapore[3], the EU, the UK and Australia. China's guidance on AI medical devices calls for continuous monitoring, system inspection and adverse-event reporting[4].

Data protection and ethics. AI systems process sensitive health data, and many jurisdictions reference data-protection regimes modelled on the EU GDPR. Qatar's 2016 Personal Data Privacy Protection Law requires explicit consent to process "personal data of special nature" and sets strict limits on cross-border data transfers[5]. The UAE's Personal Data Protection Law allows processing of health data without consent for preventative medicine, medical diagnosis or public-health purposes[6] but imposes data-localisation requirements[7].

Emergence of generative-AI regulation. Several countries (e.g., South Korea, the UAE) have begun drafting guidelines for medical devices that use generative AI. South Korea's MFDS formed a task force in January 2025 to develop the world's first generative-AI device guideline[8], while the UAE issued a non-prescriptive GenAI adoption guide in 2023 that highlights privacy and language-model limitations[9].
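As a concrete reading of the change-notification windows above, the following sketch computes reporting deadlines from a change date. It is a minimal illustration, not an SFDA tool: only the 10- and 30-day windows come from the report, and the function and category names are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical deadline helper; only the 10- and 30-day windows come from
# the SFDA rule summarised above. Names and structure are illustrative.
NOTIFICATION_WINDOWS = {
    "significant": timedelta(days=10),      # e.g., major algorithm modification
    "non_significant": timedelta(days=30),  # e.g., minor, non-safety change
}

def notification_deadline(change_type: str, change_date: date) -> date:
    """Last day to notify the authority after a device change."""
    return change_date + NOTIFICATION_WINDOWS[change_type]

print(notification_deadline("significant", date(2025, 7, 1)))      # 2025-07-11
print(notification_deadline("non_significant", date(2025, 7, 1)))  # 2025-07-31
```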
Regional and National Frameworks

United States (FDA)

The U.S. Food and Drug Administration (FDA) oversees AI-enabled medical devices through existing pathways (510(k), De Novo and premarket approval). Devices must meet safety and effectiveness requirements and are evaluated against Good Machine Learning Practice (GMLP), Predetermined Change Control Plans (PCCPs) and transparency principles.

AI-enabled device list. As of 25 March 2025, the FDA had authorised 1,016 AI/ML-enabled medical devices[10], most cleared via the 510(k) pathway[11].

Guiding principles. The joint GMLP principles (FDA, Health Canada, MHRA) stress multidisciplinary expertise, robust software engineering, representative datasets and continuous improvement[12]. PCCPs allow manufacturers to pre-specify model modifications and performance bounds[13]. Transparency guidance emphasises clear explanation of intended use, performance and logic[14].

Adaptive algorithms. The FDA encourages PCCPs for AI/ML devices that continually learn, and monitors post-market data to ensure safety. Manufacturers must detail planned modifications, update protocols and risk-control measures[13].

European Union (EU)

The EU regulates medical devices via the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR). In August 2024, the AI Act (Regulation (EU) 2024/1689) introduced a cross-cutting framework for AI systems.

Risk tiers. The AI Act divides systems into unacceptable-, high-, limited- and minimal-risk categories. AI systems that serve as safety components of products, such as in robot-assisted surgery, are deemed high-risk and must undergo risk assessment, employ high-quality datasets, maintain logs, produce technical documentation, ensure human oversight and meet robustness and cybersecurity requirements[15].

Transition timeline. Prohibitions and AI-literacy obligations apply from 2 February 2025, rules for general-purpose AI models from 2 August 2025, and high-risk AI systems embedded in regulated products must comply by 2 August 2027[16].

Interplay with MDR/IVDR. Medical Device AI (MDAI) systems become high-risk under the AI Act when the device is itself a medical device or a safety component subject to third-party conformity assessment[17]. Almost all MDR class IIa, IIb and III devices, as well as sterile class I devices, require notified-body involvement and thus become high-risk AI systems[18]. The AI Act does not alter the MDR risk class but adds AI-specific obligations[19].

Lifecycle management. Manufacturers must implement continuous monitoring, human oversight and post-market surveillance proportionate to risk[20].

United Kingdom (MHRA)

The UK continues to update its medical-device regime following Brexit.

Regulatory reforms. In June 2025 the MHRA joined the HealthAI Global Regulatory Network and introduced the AI Airlock sandbox, which lets companies test AI medical devices with regulators before National Health Service (NHS) deployment[21]. In July 2025 the MHRA proposed relying on approvals from regulators in the U.S., Canada and Australia to speed market access, and confirmed indefinite recognition of CE-marked devices while focusing the domestic UKCA route on first-in-market innovative technologies[22].

Guidance on software and AI as medical devices. The MHRA's Software and AI as a Medical Device guidance (updated February 2025) clarifies classification, transparency and significant-change control[23]. Draft post-market surveillance regulations require robust PMS systems and periodic safety update reports[24].
Under current legislation, CE-mark recognition for some software devices has been extended until 30 June 2030[25], with new legislation expected in 2026[26].

Canada (Health Canada)

Health Canada participates in joint guidance with the FDA and MHRA (GMLP, PCCPs, transparency). It operates a risk-based framework via the Medical Devices Regulations and has authorised several AI-based devices. (No new changes in 2025 beyond the joint guidance.)

China (NMPA)

China's National Medical Products Administration (NMPA) issued the Guidance for the Classification Defining of AI-Based Medical Software Products (July 2021) and the Guiding Principles for the Registration Review of Artificial Intelligence Medical Devices (January 2022).

Classification. Software is categorised by security level (mild, moderate, severe), and devices fall into the same risk-based classes (I–III) as hardware devices[27]. The guidance defines AI medical devices as those that diagnose, manage or predict diseases using medical data[28].

Registration requirements. Manufacturers must submit a detailed description of the device, design and manufacturing information, a risk-benefit analysis and a post-market surveillance plan[29]. Clinical evaluation requires valid clinical association, analytical/technical validation and clinical validation[30]. The guidance emphasises data quality, transfer-learning considerations, adverse-event monitoring and recall procedures[4].

India (CDSCO)

In India, AI-enabled health-care solutions fall under the Medical Device Rules 2017. The Central Drugs Standard Control Organisation (CDSCO) evaluates devices and issues licences.

Classification. Devices are categorised into Class A (low risk), B (moderate), C (high) and D (very high). AI-powered diagnostic tools usually fall under Class B or C[32].

Regulatory process. Applicants must provide clinical investigation data, risk assessments and evidence of a quality management system before marketing[33]. Software performing medical functions without hardware is considered SaMD and must meet the standards of the Medical Device Rules[34].

Telemedicine and ethics. Telemedicine guidelines permit AI as an assistive tool but prohibit AI from practising medicine or prescribing drugs; licensed professionals remain responsible[35]. The Indian Council of Medical Research's ethical guidelines emphasise data privacy, informed consent and integrity[36]. Data-protection reforms include the Digital Personal Data Protection Act 2023 and draft rules (2025)[37].

Singapore (Health Sciences Authority, HSA)

Singapore's HSA regulates software and AI medical devices through the Regulatory Guidelines for Software Medical Devices – A Lifecycle Approach (2019, updated 2025). The guidelines emphasise traceability, labelling, change management and post-market obligations.

Regulated scope. Software used in medical applications, whether standalone, web-based, embedded in a device, or deployed via mobile apps and artificial intelligence, is regulated as SaMD. These products must meet comprehensive requirements to ensure patient safety, cybersecurity and clinical effectiveness[1].

Labelling and versioning. SaMD must display the software version number and product-owner information; for downloadable or web-based software, version numbers must be shown on the splash screen or interface[38]. Clear versioning is required for traceability and forms part of the registration dossier; versioning data should document changes in functionality, interface modifications and bug fixes[39]. An illustrative version record follows.
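Here is a minimal sketch of such a version-traceability record. Only the documented change categories (functionality, interface, bug fixes) and the significant/non-significant distinction come from the guidance as summarised above; all field names and values are invented.

```python
# Hypothetical version-traceability record for an SaMD registration dossier.
# Field names and values are illustrative, not an HSA-prescribed schema.
version_record = {
    "software_version": "2.4.1",
    "product_owner": "ExampleMed Pte Ltd",
    "changes": {
        "functionality": ["added configurable alert threshold"],
        "interface": ["reworded result screen (assessed as non-significant)"],
        "bug_fixes": ["fixed unit conversion in PDF export"],
    },
    # True for major algorithm changes or newly introduced AI features,
    # which would trigger technical review and re-authorisation.
    "significant_change": False,
}

print(version_record["software_version"], version_record["significant_change"])
```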
Change management. The HSA distinguishes between significant changes (e.g., major algorithm modifications, introduction of AI features, interface redesigns affecting safety) and non-significant changes. Significant changes require rigorous technical review and re-authorisation[3].

Post-market surveillance. Manufacturers, importers and registrants must report adverse events in a timely manner, submit FSCA notifications and monitor performance deviations[3].

Cybersecurity. Draft guidance requires SaMD registrations to include a documented cybersecurity strategy: secure-by-design architecture, threat modelling and vulnerability assessments, verification/validation testing and real-time incident response[40].

AI-specific issues. AI-based devices must comply with data-privacy laws (the Personal Data Protection Act, Human Biomedical Research Act and Private Hospitals and Medical Clinics Act). Continuous-learning models must address automation levels, model retraining and user-override features, with ongoing performance monitoring and real-world evidence collection[40].

Japan (PMDA)

Japan's Pharmaceuticals and Medical Devices Agency (PMDA) has adapted its framework to accommodate AI/ML medical devices.

Evolution of regulation. Between 2015 and 2020, the PMDA reorganised its review sections and issued guidance for diagnostic software. A key milestone was the adoption in September 2020 of a post-approval change management protocol, analogous to the FDA's PCCP, which allows manufacturers to update AI algorithms without re-approval[41].

Challenges. Regulators acknowledge unpredictable performance changes, consent for patient-data use and the assignment of responsibility for performance degradation[42]. Japan is developing evaluation criteria for generative AI and remains engaged in international harmonisation[43].

South Korea (MFDS)

South Korea's Digital Medical Products Act took effect on 24 January 2025. It complements the existing Medical Devices Act and addresses digital health technologies, including AI and digital therapeutics.

Clinical-trial reform. In February 2024 the MFDS relaxed clinical-trial requirements for AI medical devices: low-risk AI devices can now be registered without local clinical trials, and higher-risk devices may conduct trials at sites not approved by the MFDS[44]. Previously, all AI devices required trials at certified institutions. The change aims to broaden AI adoption and accelerate research.

Immediate Market Entry Medical Technology System. Launched in November 2024, this fast track allows highly innovative devices (e.g., medical robots, digital therapeutics, AI diagnostic tools) to obtain approval within 80–140 days after clinical evaluation instead of up to 490 days. The Health Insurance Review and Assessment Service (HIRA) begins its reimbursement review concurrently, reducing patient-access delays. Products approved via this pathway undergo three years of post-market surveillance[45].

Generative-AI guidance. In January 2025, the MFDS formed a task force to develop the world's first guideline for the registration, review and approval of medical devices using generative AI[8]. The guideline will address risk factors and evaluation criteria.
Cybersecurity and usability. The Digital Medical Device Electronic Intrusion Security Guidelines (January 2025) mandate data encryption, secure communication protocols, user-access controls, real-time vulnerability assessments and continuous monitoring, and define risk-management responsibilities and incident-response procedures[8]. New usability requirements oblige manufacturers to demonstrate user-centred design and document usability-testing results[8].

Australia (TGA)

Australia's Therapeutic Goods Administration (TGA) updated its guidance on AI medical devices and machine-learning medical devices in May 2024.

Definition of an AI medical device. The TGA considers an app, website, program or embedded package a medical device when its intended use is the diagnosis, prevention, monitoring, prediction, prognosis or treatment of a disease or injury; alleviation of or compensation for an injury or disability; investigation of anatomy or physiological processes; or control or support of conception[46]. Examples include mobile-phone apps that diagnose melanoma and cloud analytics that predict patient deterioration[47].

Generative AI. The TGA notes that generative AI, which creates content such as text, images or treatment suggestions, remains nascent in med-tech and is likely to pose regulatory challenges[48].

Alignment with the FDA. Developers must provide clinical and technical evidence; generative AI and adaptive algorithms are expected to follow international best practice.

Middle East

Saudi Arabia (SFDA)

The Saudi Food and Drug Authority published its Guidance on Artificial Intelligence (AI) and Machine Learning (ML) Technologies Based Medical Devices (MDS-G010) in 2022, one of the most detailed AI-device frameworks globally.

Risk classification. Devices are classified into four levels (Classes A–D). Marketing authorisation requires compliance with Global Harmonization Task Force (GHTF) member regulations and specific Saudi labelling and supply conditions[49]. The guidance clarifies that a product intended for the investigation, detection, diagnosis, monitoring, treatment or management of a medical condition is deemed a medical device[50]. Examples include IVD software recognising cell types and AI-based biosensors predicting disease probability[51].

Clinical evaluation. Manufacturers must demonstrate valid clinical association, analytical/technical validation and clinical validation[52]. Evidence may come from published literature, original research or guidelines; failing that, manufacturers must conduct new clinical trials[53]. Analytical validation proves input correctness and output reliability, often using labelled reference datasets[54]. Clinical validation measures clinically meaningful outcomes using metrics such as sensitivity, specificity, PPV/NPV and likelihood ratios[55] (see the sketch after this subsection).

Risk management and QMS. Manufacturers must ensure that devices pose no unacceptable risks and that benefits outweigh residual risks; risk-management plans must include risk-acceptability criteria and methods to evaluate cybersecurity and algorithmic errors[56]. Devices must be designed and manufactured under an ISO 13485 quality management system[57], and change notifications must be submitted within 10 days (significant) or 30 days (non-significant)[2]. Both locked and adaptive algorithms are regulated[58].

Binding status. The MDS-G010 guidance is largely non-binding but derives from the Medical Device Law, and some sanctions apply via the underlying law[31].

Data protection. Saudi Arabia's 2023 Personal Data Protection Law aligns with the GDPR and restricts cross-border data transfers to jurisdictions with adequate protection[59]. Future regulations are expected to detail AI-specific data-processing requirements[60].
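To make the clinical-validation metrics concrete, here is a small, self-contained sketch that computes them from confusion-matrix counts. The definitions are standard; the function name and the example counts are invented for illustration.

```python
# Standard confusion-matrix definitions of the metrics named above.
def clinical_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": tp / (tp + fp),                 # positive predictive value
        "NPV": tn / (tn + fn),                 # negative predictive value
        "LR+": sensitivity / (1 - specificity),
        "LR-": (1 - sensitivity) / specificity,
    }

# 90 true positives, 20 false positives, 880 true negatives, 10 false negatives:
print(clinical_metrics(tp=90, fp=20, tn=880, fn=10))
# sensitivity 0.90, specificity ~0.978, LR+ ~40.5
```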
United Arab Emirates (UAE)

The Ministry of Health and Prevention (MOHAP) regulates medical devices and software.

Classification and registration. Devices require market authorisation from MOHAP and are classified into four risk levels (Classes I, IIa, IIb and III), similar to the EU MDR[61]. Registration is valid for five years and can be cancelled if product data changes[62]. Software not intended for diagnosis or treatment may be exempt under Federal Law No. 8 of 2019[63]. Abu Dhabi's Department of Health, under a 2018 AI policy for healthcare, requires AI governance structures, protection of patient information, regular audits and compliance with UAE and DOH requirements for AI use in healthcare[64].

Digital health and data. Digital Dubai's Ethical AI Guidelines (2021) encourage involving healthcare professionals in AI development and design[65]. Personal data is governed by the PDPL and the Federal ICT Law, which allow processing of sensitive health data for medical purposes without consent but impose strict data-localisation rules and limit cross-border transfers[66].

AI and digital therapeutics. The UAE's National AI Strategy 2031 promotes AI across sectors, including healthcare; regulatory scrutiny of the Babylon symptom-assessment app illustrates that such tools need approval[65].

Qatar

Device import permits. Qatar does not require formal registration of medical devices; the Ministry of Economy and Commerce issues import permits, classifying devices into four risk classes (I–IV) based on the EU MDR[67].

Data protection. Qatar's Personal Data Privacy Protection Law 2016 (PDPPL) is modelled on the GDPR and classifies "personal data with special nature," including health data[68]. Processing such data requires permission from the Compliance and Data Protection Department, and explicit consent is needed[69]. The PDPPL forbids measures that limit cross-border data flows unless necessary to protect data[70]. Qatar pioneered regional data-protection laws and emphasises explicit consent and limited cross-border transfers[72].

Research and certification. Local research projects are developing guidelines and certification processes for AI systems in healthcare[71].

Analysis and Outlook

Regulatory frameworks for AI-enabled medical devices are converging on several principles: risk-proportional oversight, lifecycle and change management, transparent documentation, robust clinical and analytical validation, and data-protection compliance. Jurisdictions are moving from static approvals towards adaptive models that allow iterative improvement while safeguarding patients. Cooperation among regulators, illustrated by the joint GMLP principles and international reliance pathways, seeks to harmonise standards and accelerate access to innovation. However, divergences remain in data-localisation rules, consent requirements and the scope of AI-specific obligations. The emergence of generative AI has prompted South Korea and the UAE to draft bespoke guidelines, signalling future regulatory activity. Manufacturers should expect increasing emphasis on explainability, cybersecurity and post-market evidence.
Early engagement with regulators via sandboxes (e.g., the UK's AI Airlock) or fast-track pathways (e.g., Korea's Immediate Market Entry system) can facilitate deployment. Companies operating across borders must navigate differing data-transfer rules and may need to adopt federated learning or local hosting to comply with strict data-protection laws; a minimal federated-learning sketch follows. Stakeholders in emerging markets should monitor evolving guidance, particularly in the Gulf states and Asia-Pacific, where pioneering policies could influence global norms.
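As an illustration of the federated-learning option, the sketch below averages locally trained model weights across sites so that raw patient data never leaves each site. It assumes a plain least-squares model and three hypothetical hospital sites; everything here is illustrative rather than a compliance recipe.

```python
import numpy as np

# Minimal federated-averaging (FedAvg-style) sketch: sites train locally and
# share only model weights, never raw patient data, which is one way to work
# within data-localisation rules. The linear model is purely illustrative.

def local_update(w, X, y, lr=0.01):
    """One gradient step of least-squares regression on local data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_average(site_weights, site_sizes):
    """Average site models weighted by local sample counts."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]
global_w = np.zeros(5)
for _ in range(20):  # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(local_ws, [len(y) for _, y in sites])
print(global_w)
```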
References

[1] [3] [38] [39] [40] Software as a Medical Device in Singapore | Asia Actual, LLC. https://asiaactual.com/singapore/software-as-a-medical-device/

[2] [5] [6] [7] [9] [49] [50] [51] [52] [53] [54] [55] [56] [57] [58] [59] [60] [66] [67] [68] [69] [70] [71] [72] [74] [75] [76] Regulating AI in health in the Middle East: case studies from Qatar, Saudi Arabia and the United Arab Emirates | Research Handbook on Health, AI and the Law | NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK613206/

[4] [27] Use of Artificial Intelligence in Healthcare Industry in Mainland China | Triage Health Law. https://www.triagehealthlawblog.com/life-sciences/use-of-artificial-intelligence-in-healthcare-industry-in-mainland-china/

[8] [44] [45] South Korea: Recent Medical Device Registration Changes Reshape the Healthcare Landscape. https://www.pacificbridgemedical.com/uncategorized/south-korea-recent-medical-device-registration-changes/

[10] [11] AI-Enabled Medical Devices: Transformation and Regulation. https://www.mccarthy.ca/en/insights/blogs/techlex/ai-enabled-medical-devices-transformation-and-regulation

[12] Good Machine Learning Practice for Medical Device Development: Guiding Principles | FDA. https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles

[13] Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles | FDA. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles

[14] Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles | FDA. https://www.fda.gov/medical-devices/software-medical-device-samd/transparency-machine-learning-enabled-medical-devices-guiding-principles

[15] [16] AI Act | Shaping Europe's digital future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[17] [18] [19] [20] European Commission document b78a17d7-e3cd-4943-851d-e02a2f22bbb4_en. https://health.ec.europa.eu/document/download/b78a17d7-e3cd-4943-851d-e02a2f22bbb4_en

[21] UK MHRA leads safe use of AI in healthcare as first country in new global network | GOV.UK. https://www.gov.uk/government/news/uk-mhra-leads-safe-use-of-ai-in-healthcare-as-first-country-in-new-global-network

[22] MHRA announces proposals to improve access to world's best medical devices for patients and to boost economic growth in Britain's med tech sector | GOV.UK. https://www.gov.uk/government/news/mhra-announces-proposals-to-improve-access-to-worlds-best-medical-devices-for-patients-and-to-boost-economic-growth-in-britains-med-tech-sector

[23] Software and artificial intelligence (AI) as a medical device | GOV.UK. https://www.gov.uk/government/publications/software-and-artificial-intelligence-ai-as-a-medical-device/software-and-artificial-intelligence-ai-as-a-medical-device

[24] [25] [26] Update on regulatory change for software-based medical devices. https://www.penningtonslaw.com/news-publications/latest-news/2025/update-on-regulatory-changes-for-software-based-medical-devices

[28] [29] [30] [31] Regulating AI-Based Medical Devices in Saudi Arabia: New Legal Paradigms in an Evolving Global Legal Order | PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC11250741/

[32] [33] [34] [35] [36] [37] [73] Artificial Intelligence in Healthcare | Nishith Desai Associates. https://www.nishithdesai.com/fileadmin/user_upload/pdfs/Research_Papers/Artificial-Intelligence-in-Healthcare.pdf

[41] [42] Delivering on the Promise of Medical AI: Can Japan's Regulators Keep Pace? (Part 1) | The Tokyo Foundation. https://www.tokyofoundation.org/research/detail.php

[43] The Evolving Regulatory Paradigm of AI in MedTech: A Review of Perspectives and Where We Are Today | PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC11043174/

[46] [47] [48] Artificial Intelligence Medical Device Software in Australia | Asia Actual. https://asiaactual.com/blog/artificial-intelligence-medical-device-software-in-australia/

[61] [62] [63] [64] [65] Regulation of AI-Driven HealthTech in the UAE | Ronin Legal. https://roninlegalconsulting.com/regulation-of-ai-driven-healthcare-applications-in-the-uae/
"