
Can you create a webpage infographic for this research work? You need to display the quantitative information in a perceptible way, just as an audience would see it in a slide deck.

"Global Regulations for AI-Enabled Medical Devices (July 2025)

Introduction

Artificial-intelligence (AI) techniques underpin a new generation of medical technologies. Systems that interpret medical images, predict disease progression or provide clinical recommendations are increasingly deployed in hospitals and digital health platforms. Regulators must ensure such tools are safe and effective while fostering innovation. This report reviews the current state of regulation for AI-enabled medical devices and Software as a Medical Device (SaMD) as of July 2025 in major jurisdictions around the world. It draws from government guidance, statutory instruments and recent policy developments and highlights quantitative information where available.

Global Themes

Risk-based regulation and classification. Most jurisdictions apply a multi-class framework based on the device's intended use and patient risk. Low-risk AI software may be exempt from local clinical trials or have simplified registration, while high-risk tools (e.g., diagnostic or therapeutic systems) undergo rigorous pre-market review, post-market surveillance, and human-oversight requirements.

Life-cycle and change management. Regulators increasingly recognise that AI models may change over time. The U.S. FDA's Predetermined Change Control Plan and Japan's post-approval change protocol allow manufacturers to pre-specify algorithm updates. Singapore's Health Sciences Authority (HSA) requires versioning and traceability data and distinguishes between significant (major algorithm modifications, introduction of AI features) and non-significant changes[1]. Saudi Arabia's SFDA mandates notifying the authority within 10 days of significant changes and within 30 days of non-significant changes[2].
Transparency, quality management and post-market surveillance (PMS). There is convergence on the need to document training datasets, describe AI logic in clear language, conduct analytical and clinical validation, monitor performance, and manage cyber-security. PMS obligations, such as timely reporting of adverse events, submission of Field Safety Corrective Action (FSCA) notifications and performance-deviation monitoring, are explicitly required in Singapore[3], the EU, UK and Australia. China's guidance on AI medical devices calls for continuous monitoring, system inspection and adverse-event reporting[4].

Data protection and ethics. AI systems process sensitive health data. Many jurisdictions reference data-protection regimes modeled on the EU GDPR. Qatar's 2016 Personal Data Privacy Protection Law requires explicit consent to process "personal data of special nature" and sets strict limits on cross-border data transfers[5]. The UAE's Personal Data Protection Law allows processing of health data without consent for preventative medicine, medical diagnosis or public health purposes[6] but imposes data-localisation requirements[7].

Emergence of generative AI regulation. Several countries (e.g., South Korea, UAE) have begun drafting guidelines for medical devices that use generative AI. South Korea's MFDS formed a task force in January 2025 to develop the world's first generative-AI device guideline[8], while the UAE issued a non-prescriptive GenAI adoption guide in 2023 that highlights privacy and language-model limitations[9].

Regional and National Frameworks

United States (FDA)

The U.S. Food and Drug Administration (FDA) oversees AI-enabled medical devices through existing pathways (510(k), De Novo and premarket approval). Devices must meet safety and effectiveness requirements and are evaluated using Good Machine Learning Practice (GMLP), Predetermined Change Control Plans (PCCPs) and transparency principles.

AI-enabled device list.
As of 25 March 2025, the FDA had authorised 1,016 AI/ML-enabled medical devices[10], most cleared via the 510(k) pathway[11].

Guiding principles. The joint GMLP principles (FDA, Health Canada, MHRA) stress multidisciplinary expertise, robust software engineering, representative datasets and continuous improvement[12]. PCCPs allow manufacturers to pre-specify model modifications and performance bounds[13]. Transparency guidance emphasises clear explanation of intended use, performance and logic[14].

Adaptive algorithms. The FDA encourages PCCPs for AI/ML devices that continually learn and monitors post-market data to ensure safety. Manufacturers must detail planned modifications, update protocols and risk-control measures[13].

European Union (EU)

The EU regulates medical devices via the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR). In August 2024, the AI Act (Regulation (EU) 2024/1689) introduced a cross-cutting framework for AI systems.

Risk tiers. The AI Act divides systems into unacceptable, high-risk, limited-risk and minimal-risk categories. AI safety components of products such as robot-assisted surgery are deemed high-risk and must undergo risk assessment, employ high-quality datasets, maintain logs, produce technical documentation, ensure human oversight and meet robustness and cybersecurity requirements[15].

Transition timeline. Prohibitions and AI literacy obligations apply from 2 Feb 2025, rules for general-purpose AI models from 2 Aug 2025, and high-risk AI systems embedded in regulated products must comply by 2 Aug 2027[16].

Interplay with MDR/IVDR. Medical Device AI (MDAI) systems become high-risk under the AI Act when the device itself is a medical device or safety component and is subject to third-party conformity assessment[17]. Almost all MDR class IIa, IIb, III and sterile class I devices require notified-body involvement and thus become high-risk AI systems[18].
The AI Act does not alter the MDR risk class but adds AI-specific obligations[19].

Lifecycle management. Manufacturers must implement continuous monitoring, human oversight and post-market surveillance proportionate to risk[20].

United Kingdom (MHRA)

The UK continues to update its medical-device regime following Brexit.

Regulatory reforms. In June 2025 the MHRA joined the HealthAI Global Regulatory Network and introduced an AI Airlock sandbox to allow companies to test AI medical devices with regulators before National Health Service (NHS) deployment[21]. In July 2025 the MHRA proposed relying on approvals from regulators in the U.S., Canada and Australia to speed market access, and confirmed indefinite recognition of CE-marked devices while focusing the domestic UKCA route on first-in-market innovative technologies[22].

Guidance on software and AI as medical devices. The MHRA's Software and AI as a Medical Device guidance (updated Feb 2025) clarifies classification, transparency and significant-change control[23]. Draft post-market surveillance regulations require robust PMS systems and periodic safety update reports[24]. CE-mark recognition for some software devices has been extended until 30 June 2030[25], with new legislation expected in 2026[26].

Canada (Health Canada)

Health Canada participates in joint guidance with the FDA and MHRA (GMLP, PCCPs, transparency). It operates a risk-based framework via the Medical Devices Regulations and has authorised several AI-based devices. (No new changes in 2025 beyond the joint guidance.)

China (NMPA)

China's National Medical Products Administration (NMPA) issued the Guidance for the Classification Defining of AI-Based Medical Software Products (July 2021) and the Guiding Principles for the Registration Review of Artificial Intelligence Medical Devices (Jan 2022).

Classification. Devices are categorised by software security level (mild, moderate, severe) and risk.
A risk-based classification (Classes I-III) applies similarly to hardware devices[27]. The guidance defines AI medical devices as those that diagnose, manage or predict diseases using medical data[28].

Registration requirements. Manufacturers must submit a detailed description of the device, design and manufacturing information, a risk-benefit analysis and a post-market surveillance plan[29]. Clinical evaluation requires valid clinical association, analytical/technical validation and clinical validation[30]. The guidance emphasises data quality, transfer-learning considerations, adverse-event monitoring and recall procedures[4].

Binding status. The MDS-G010 guidance is largely non-binding but derived from the Medical Device Law; some sanctions apply via the underlying law[31].

India (CDSCO)

In India, AI-enabled health-care solutions fall under the Medical Device Rules 2017. The Central Drugs Standard Control Organisation (CDSCO) evaluates devices and issues licences.

Classification. Devices are categorised into Class A (low risk), B (moderate), C (high) and D (very high). AI-powered diagnostic tools usually fall under Classes B or C[32].

Regulatory process. Applicants must provide clinical investigation data, risk assessments and evidence of a quality management system before marketing[33]. Software performing medical functions without hardware is considered SaMD and must meet MDR standards[34].

Telemedicine and ethics. Telemedicine guidelines permit AI as an assistive tool but prohibit AI from practising medicine or prescribing drugs; licensed professionals remain responsible[35]. The Indian Council of Medical Research's ethical guidelines emphasise data privacy, informed consent and integrity[36]. Data-protection reforms include the Digital Personal Data Protection Act 2023 and draft rules (2025)[37].
Singapore (Health Sciences Authority, HSA)

Singapore's HSA regulates software and AI medical devices through the Regulatory Guidelines for Software Medical Devices: A Lifecycle Approach (2019, updated 2025). The guidelines emphasise traceability, labelling, change management and post-market obligations.

Regulated scope. Software used in medical applications, whether standalone, web-based, embedded in a device, or deployed via mobile apps and artificial intelligence, is regulated as SaMD. These products must meet comprehensive requirements to ensure patient safety, cybersecurity and clinical effectiveness[1].

Labelling and versioning. SaMD must display the software version number and product owner information. For downloadable or web-based software, version numbers must be shown on the splash screen or interface[38]. Clear versioning is required for traceability and is part of the registration dossier; versioning data should document changes in functionality, interface modifications and bug fixes[39].

Change management. The HSA distinguishes between significant changes (e.g., major algorithm modifications, introduction of AI features, interface redesigns impacting safety) and non-significant changes. Significant changes require rigorous technical review and re-authorisation[3].

Post-market surveillance. Manufacturers, importers and registrants must report adverse events, submit FSCA notifications and monitor performance deviations[3]. Singapore emphasises timely reporting.

Cybersecurity. Draft guidance requires SaMD registrations to include a documented cybersecurity strategy: secure-by-design architecture, threat modelling and vulnerability assessments, verification/validation testing, and real-time incident response[40].

AI-specific issues. AI-based devices must comply with data privacy laws (Personal Data Protection Act, Human Biomedical Research Act and Private Hospitals and Medical Clinics Act).
Continuous-learning models must address automation levels, model retraining and user override features, with ongoing performance monitoring and real-world evidence collection[40].

Japan (PMDA)

Japan's Pharmaceuticals and Medical Devices Agency (PMDA) has adapted its framework to accommodate AI/ML medical devices.

Evolution of regulation. From 2015 to 2020, the PMDA reorganised review sections and issued guidance for diagnostic software. A key milestone was the adoption of a post-approval change management protocol in September 2020, analogous to the FDA's PCCP, allowing manufacturers to update AI algorithms without re-approval[41].

Challenges. Regulators acknowledged unpredictable performance changes, consent for patient data use and assignment of responsibility for performance degradation[42]. Japan is developing evaluation criteria for generative AI and remains engaged in international harmonisation[43].

South Korea (MFDS)

South Korea's Digital Medical Products Act took effect on 24 Jan 2025. It complements the existing Medical Devices Act and addresses digital health technologies, including AI and digital therapeutics.

Clinical trial reform. In Feb 2024 the MFDS relaxed clinical-trial requirements for AI medical devices: low-risk AI devices can now be registered without local clinical trials, and higher-risk devices may conduct trials at non-MFDS-approved sites[44]. Previously, all AI devices required trials at certified institutions. This change aims to broaden AI adoption and accelerate research.

Immediate Market Entry Medical Technology System. Launched in Nov 2024, this fast track allows highly innovative devices (e.g., medical robots, digital therapeutics, AI diagnostic tools) to obtain approval within 80-140 days after clinical evaluation instead of up to 490 days. HIRA begins reimbursement review concurrently, reducing patient-access delays. Products approved via this pathway undergo three years of post-market surveillance[45].

Generative AI guidance.
In Jan 2025, the MFDS formed a task force to develop the world's first guideline for registration, review and approval of medical devices using generative AI[8]. The guideline will address risk factors and evaluation criteria.

Cybersecurity and usability. The Digital Medical Device Electronic Intrusion Security Guidelines (Jan 2025) mandate data encryption, secure communication protocols, user-access controls, real-time vulnerability assessments and continuous monitoring. They define risk-management responsibilities and incident-response procedures[8]. New usability requirements oblige manufacturers to demonstrate user-centred design and document usability testing results[8].

Australia (TGA)

Australia's Therapeutic Goods Administration (TGA) updated its guidance for AI medical devices and machine-learning medical devices in May 2024.

Definition of AI medical device. The TGA considers an app, website, program or embedded package a medical device when its intended use is diagnosis, prevention, monitoring, prediction, prognosis or treatment of a disease or injury; alleviation of or compensation for an injury or disability; investigation of anatomy or physiological processes; or control/support of conception[46]. Examples include mobile-phone apps that diagnose melanoma and cloud analytics that predict patient deterioration[47].

Generative AI. The TGA notes that generative AI, which creates content such as text, images or treatment suggestions, remains nascent in med-tech and is likely to pose regulatory challenges[48].

Alignment with the FDA. Developers must provide clinical and technical evidence; generative AI and adaptive algorithms are expected to follow international best practices.

Middle East

Saudi Arabia (SFDA)

The Saudi Food and Drug Authority published its Guidance on Artificial Intelligence (AI) and Machine Learning (ML) Technologies Based Medical Devices in 2022, one of the most detailed AI-device frameworks globally.

Risk classification.
Devices are classified into four levels (Classes A-D). Marketing authorisation requires compliance with Global Harmonization Task Force (GHTF) member regulations and specific Saudi labelling and supply conditions[49]. The guidance clarifies that a product intended for investigation, detection, diagnosis, monitoring, treatment or management of a medical condition is deemed a medical device[50]. Examples include IVD software recognising cell types and AI-based biosensors predicting disease probability[51].

Clinical evaluation. Manufacturers must demonstrate valid clinical association, analytical/technical validation and clinical validation[52]. Evidence may come from published literature, original research or guidelines; lacking that, manufacturers must conduct new clinical trials[53]. Analytical validation proves input correctness and output reliability, often using labelled reference datasets[54]. Clinical validation measures clinically meaningful outcomes and uses metrics such as sensitivity, specificity, PPV/NPV and likelihood ratios[55].

Risk management and QMS. Manufacturers must ensure that devices do not pose unacceptable risks and that benefits outweigh residual risks; risk management plans must include risk-acceptability criteria and methods to evaluate cybersecurity and algorithmic errors[56]. Devices must be designed and manufactured under ISO 13485 quality-management systems[57], and change notifications must be submitted within 10 days (significant) or 30 days (non-significant)[2]. Both locked and adaptive algorithms are regulated[58].

Data protection. Saudi Arabia's 2023 Personal Data Protection Law aligns with the GDPR and restricts cross-border data transfers to jurisdictions with adequate protection[59]. Future regulations are expected to detail AI-specific data-processing requirements[60].

United Arab Emirates (UAE)

The Ministry of Health and Prevention (MOHAP) regulates medical devices and software.

Classification and registration.
Devices require market authorisation from MOHAP and are classified into four risk levels (Classes I, IIa, IIb, III), similar to the EU MDR[61]. Registration is valid for five years and can be cancelled if product data changes[62]. Software not intended for diagnosis or treatment may be exempt under Federal Law No. 8 of 2019[63]. Abu Dhabi's Department of Health requires AI governance, regular audits and compliance with UAE and DOH policies for AI use in healthcare[64].

Digital health and data. Dubai's Ethical AI Guidelines (2021) encourage involving healthcare professionals in AI development[65]. Personal data is governed by the PDPL and the Federal ICT Law, which allow processing of sensitive health data for medical purposes without consent but impose strict data-localisation rules and limit cross-border transfers[66].

AI and digital therapeutics. The UAE's National AI Strategy 2031 promotes AI across sectors, including healthcare; the regulatory scrutiny of the Babylon symptom-assessment app illustrates the need for approval[65].

Qatar

Device import permits. Qatar does not require formal registration of medical devices; the Ministry of Economy and Commerce issues import permits, classifying devices based on EU MDR risk classes I-IV[67].

Data protection. Qatar's Personal Data Privacy Protection Law 2016 (PDPPL) is modelled on the GDPR and classifies "personal data with special nature," including health data[68]. Processing such data requires permission from the Compliance and Data Protection Department, and explicit consent is needed[69]. The PDPPL forbids measures that limit cross-border data flows unless necessary to protect data[70].

Other Jurisdictions

United Arab Emirates (continued): In Abu Dhabi, a 2018 AI policy for healthcare requires AI governance structures, protection of patient information, regular audits and compliance with regulatory requirements[64]. Digital Dubai's ethical AI guidelines recommend involving healthcare professionals in AI design[65].
Qatar: Local research projects are developing guidelines for AI systems and certification processes in healthcare[71]. Qatar pioneered regional data-protection laws and emphasises explicit consent and limited cross-border data transfers[72].

Comparative Summary

Analysis and Outlook

Regulatory frameworks for AI-enabled medical devices are converging on several principles: risk-proportional oversight, lifecycle and change management, transparent documentation, robust clinical and analytical validation, and data-protection compliance. Jurisdictions are moving from static approvals towards adaptive models that allow iterative improvements while safeguarding patients. Cooperation among regulators, illustrated by the joint GMLP principles and international reliance pathways, seeks to harmonise standards and accelerate access to innovations. However, divergences remain in data-localisation rules, consent requirements and the scope of AI-specific obligations. The emergence of generative AI has prompted Korea and the UAE to draft bespoke guidelines, signalling future regulatory activity.

Manufacturers should expect increasing emphasis on explainability, cyber-security and post-market evidence. Early engagement with regulators via sandboxes (e.g., the UK's AI Airlock) or fast-track pathways (e.g., Korea's Immediate Market Entry system) can facilitate deployment. Companies operating across borders must navigate differing data-transfer rules and may need to implement federated learning or local hosting solutions to comply with strict data-protection laws. Stakeholders in emerging markets should monitor evolving guidance, particularly in the Gulf states and Asia-Pacific, where pioneering policies could influence global norms.
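The clinical-validation metrics the SFDA guidance calls for (sensitivity, specificity, PPV/NPV and likelihood ratios) all follow directly from a 2x2 confusion matrix. A minimal sketch in Python; the function name and the example counts are illustrative, not from the report:

```python
def validation_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Derive standard clinical-validation metrics from a 2x2 confusion
    matrix: true positives (tp), false positives (fp), true negatives (tn)
    and false negatives (fn)."""
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": ppv,
        "npv": npv,
        "lr+": lr_pos,
        "lr-": lr_neg,
    }

# Hypothetical reading of a labelled reference dataset of 200 cases:
m = validation_metrics(tp=90, fp=20, tn=80, fn=10)
```

With these counts, sensitivity is 0.9, specificity 0.8, and the positive likelihood ratio 4.5, the kind of figures a manufacturer would report as part of clinical validation evidence.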
References

[1] [3] [38] [39] [40] Software as a Medical Device in Singapore | Asia Actual, LLC. https://asiaactual.com/singapore/software-as-a-medical-device/
[2] [5] [6] [7] [9] [49] [50] [51] [52] [53] [54] [55] [56] [57] [58] [59] [60] [66] [67] [68] [69] [70] [71] [72] [74] [75] [76] Regulating AI in health in the Middle East: case studies from Qatar, Saudi Arabia and the United Arab Emirates. Research Handbook on Health, AI and the Law, NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK613206/
[4] [27] Use of Artificial Intelligence in Healthcare Industry in Mainland China | Triage Health Law. https://www.triagehealthlawblog.com/life-sciences/use-of-artificial-intelligence-in-healthcare-industry-in-mainland-china/
[8] [44] [45] South Korea: Recent Medical Device Registration Changes Reshape the Healthcare Landscape. https://www.pacificbridgemedical.com/uncategorized/south-korea-recent-medical-device-registration-changes/
[10] [11] AI-Enabled Medical Devices: Transformation and Regulation. https://www.mccarthy.ca/en/insights/blogs/techlex/ai-enabled-medical-devices-transformation-and-regulation
[12] Good Machine Learning Practice for Medical Device Development: Guiding Principles | FDA. https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles
[13] Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles | FDA. https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles
[14] Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles | FDA. https://www.fda.gov/medical-devices/software-medical-device-samd/transparency-machine-learning-enabled-medical-devices-guiding-principles
[15] [16] AI Act | Shaping Europe's digital future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[17] [18] [19] [20] b78a17d7-e3cd-4943-851d-e02a2f22bbb4_en. https://health.ec.europa.eu/document/download/b78a17d7-e3cd-4943-851d-e02a2f22bbb4_en
[21] UK MHRA leads safe use of AI in healthcare as first country in new global network. GOV.UK. https://www.gov.uk/government/news/uk-mhra-leads-safe-use-of-ai-in-healthcare-as-first-country-in-new-global-network
[22] MHRA announces proposals to improve access to world's best medical devices for patients and to boost economic growth in Britain's med tech sector. GOV.UK. https://www.gov.uk/government/news/mhra-announces-proposals-to-improve-access-to-worlds-best-medical-devices-for-patients-and-to-boost-economic-growth-in-britains-med-tech-sector
[23] Software and artificial intelligence (AI) as a medical device. GOV.UK. https://www.gov.uk/government/publications/software-and-artificial-intelligence-ai-as-a-medical-device/software-and-artificial-intelligence-ai-as-a-medical-device
[24] [25] [26] Update on regulatory change for software-based medical devices. https://www.penningtonslaw.com/news-publications/latest-news/2025/update-on-regulatory-changes-for-software-based-medical-devices
[28] [29] [30] [31] Regulating AI-Based Medical Devices in Saudi Arabia: New Legal Paradigms in an Evolving Global Legal Order. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC11250741/
[32] [33] [34] [35] [36] [37] [73] Artificial-Intelligence-in-Healthcare.pdf. https://www.nishithdesai.com/fileadmin/user_upload/pdfs/Research_Papers/Artificial-Intelligence-in-Healthcare.pdf
[41] [42] Delivering on the Promise of Medical AI: Can Japan's Regulators Keep Pace? (Part 1) | Research | The Tokyo Foundation. https://www.tokyofoundation.org/research/detail.php
[43] The Evolving Regulatory Paradigm of AI in MedTech: A Review of Perspectives and Where We Are Today. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC11043174/
[46] [47] [48] Artificial Intelligence Medical Device Software in Australia. https://asiaactual.com/blog/artificial-intelligence-medical-device-software-in-australia/
[61] [62] [63] [64] [65] Regulation of AI-Driven HealthTech in the UAE. Ronin Legal. https://roninlegalconsulting.com/regulation-of-ai-driven-healthcare-applications-in-the-uae/"
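One way to start on the requested infographic is to render the report's headline figures as a static HTML stat panel. A minimal sketch in Python; the figure values come from the report above, while the labels, function name and styling hooks are illustrative placeholders:

```python
# Key quantitative figures drawn from the report, as (label, value, unit).
STATS = [
    ("FDA-authorised AI/ML devices (25 Mar 2025)", 1016, "devices"),
    ("Korea fast-track approval, upper bound", 140, "days"),
    ("Korea standard approval, up to", 490, "days"),
    ("SFDA notification window, significant change", 10, "days"),
    ("SFDA notification window, non-significant change", 30, "days"),
]

def render_stat_panel(stats) -> str:
    """Return a self-contained HTML page with one stat card per figure,
    ready to be styled with CSS or fed into a charting library."""
    cards = "\n".join(
        f'<div class="stat"><span class="value">{value:,}</span> '
        f'<span class="unit">{unit}</span><p>{label}</p></div>'
        for label, value, unit in stats
    )
    return (
        "<!DOCTYPE html><html><head><meta charset='utf-8'>"
        "<title>AI Medical Device Regulation (July 2025)</title></head>"
        "<body><h1>Global Regulations for AI-Enabled Medical Devices</h1>"
        f"{cards}</body></html>"
    )

html = render_stat_panel(STATS)
```

Writing `html` to a file and opening it in a browser gives a bare-bones version of the slide-deck view; proportional bar widths or a chart library could then make the magnitudes perceptible at a glance.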