Background

Coronaviruses (CoVs) are positive-sense, single-stranded, enveloped ribonucleic acid (RNA) viruses, many of which are commonly found in humans and cause mild symptoms. Over the past two decades, emerging pathogenic CoVs capable of causing life-threatening disease in animals and humans have been identified, namely swine acute diarrhoea syndrome coronavirus (SADS-CoV), severe acute respiratory syndrome coronavirus (SARS-CoV), and Middle East respiratory syndrome coronavirus (MERS-CoV) 1-3. In December 2019 the Wuhan Municipal Health Committee identified an outbreak of viral pneumonia cases of unknown cause. Coronavirus RNA was quickly identified in some of these patients 4. This novel coronavirus has subsequently been named SARS-CoV-2; it has 89% nucleotide identity with bat SARS-like-CoVZXC21 and 82% with human SARS-CoV 5. The disease caused by this virus has been designated coronavirus disease 2019 (COVID-19).

SARS-CoV-2 has spread rapidly following its initial identification in Wuhan, Hubei Province, China 6. On January 5, 2020 there were 59 confirmed cases. As of 5th May 2020, the SARS-CoV-2 pandemic has resulted in more than 3,660,055 confirmed infections globally, with disease reported in over 210 countries, and more than 252,675 deaths. The crude global mortality is currently around 3.4%, significantly greater than that reported for seasonal influenza, which affects up to 1 billion people each year and causes between 290,000 and 650,000 deaths 7. Disease is reported from the majority of countries in the Asia-Pacific region, with large numbers affected in South Korea (>8,000), and disease also confirmed in Vietnam, Thailand, Singapore, Malaysia, the Philippines and Indonesia. Outside the Asia-Pacific region, exponential growth in the number of cases is seen in most European countries, notably Italy, Spain, Germany, France, Switzerland and the United Kingdom. Similar patterns of spread are seen in the Americas, with the USA now reporting more than 1,212,955 cases 7.

The main route of spread of COVID-19 is believed to be through respiratory droplets; however, other routes, including faeco-oral transmission and fomites, may be important 6. Currently there is no proven effective prophylaxis, treatment or vaccine. The estimated COVID-19 basic reproductive ratio (R0) of 1.25 to 3.0 is similar to or higher than that of seasonal (1.3) or pandemic influenza (1.4 to 1.8) 8, 9. The use of personal protective equipment is paramount for healthcare staff: significant numbers have been infected in both Italy and China 10. There is a pressing need to identify effective treatments and preventive measures for COVID-19. Testing, isolation and quarantine measures are key in managing the epidemic, but the development of treatments that shorten disease duration, improve outcome, and reduce infectivity is clearly essential, helping the individual patient and potentially also limiting spread.
While novel agents, such as remdesivir, are in development, these will not be available to the vast majority of patients within the coming months 11, 12. However, repurposing older drugs that are already licensed and manufactured has the potential for dramatic impact at both individual and population levels, since roll-out of treatment to the wider population is feasible, affordable and safe. The choice of such drugs to trial should be driven by evidence of in vitro efficacy, plausibility, and deliverability of the intervention.

Scientific rationale

COVID-19 is a respiratory disease caused by a novel coronavirus (SARS-CoV-2) and causes substantial morbidity and mortality. There is currently no vaccine to prevent COVID-19 nor therapeutic agent to treat it. This clinical trial is designed to evaluate potential therapeutics for the treatment of hospitalised people with COVID-19. We hypothesise that chloroquine slows viral replication in patients with COVID-19, attenuating the infection and resulting in a more rapid decline of viral load in throat/nose swabs. This viral attenuation should be associated with improved patient outcomes. Given the enormous experience of its use in malaria chemoprophylaxis, its excellent safety and tolerability profile, and its very low cost, if proved effective chloroquine would be a readily deployable and affordable treatment for patients with COVID-19.

Chloroquine. Chloroquine is an antimalarial drug that was discovered in 1934 and has been widely prescribed for malaria since 1947 13. It has been safely prescribed to millions of people in all income settings since then. Chloroquine is inexpensive and simple to administer, is a first-line treatment for non-falciparum malaria, and is on the World Health Organization's List of Essential Medicines 14.

Chloroquine has recently been reported as a potential broad-spectrum antiviral drug 15. It was found to have significant in vitro activity against the SARS-CoV responsible for the 2003 SARS outbreak, which affected at least 30 countries; it blocks virus infection by increasing the pH required for viral fusion and by interfering with the glycosylation of cellular receptors of SARS-CoV 16. More recently it has been shown to also have significant in vitro activity against SARS-CoV-2, acting at both the viral entry and post-entry stages during experimental infection of Vero E6 cells 15. A half-maximal effective concentration (EC50, the concentration associated with a 50% decrease in viral replication in Vero E6 cells) of 1.13 µM was reported, with a corresponding EC90 of 6.9 µM 15. This effect occurred whether the drug was given before or after viral inoculation 15. Activity of chloroquine in the low micromolar range has been confirmed elsewhere 17. In addition to its antiviral effects, chloroquine also has an immunomodulatory effect, which may synergistically enhance its antiviral action in vivo 15. Chloroquine has a wide volume of distribution and achieves high lung concentrations following oral administration 18. The relationship between plasma concentrations and concentrations in respiratory epithelium is not known precisely, though in rats the concentration in the lung is between 124- and 748-fold that in plasma 19. (For orientation, the micromolar potencies above are converted to mass units in the sketch below.)
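For comparison with measured plasma or lung drug levels, the in vitro potencies can be expressed in mass units. A minimal sketch, assuming the molar mass of chloroquine base (~319.87 g/mol); the helper function is illustrative, not part of the protocol:

```python
# Convert the reported in vitro EC50/EC90 of chloroquine against SARS-CoV-2
# (Vero E6 cells) from micromolar to ng/mL.
CHLOROQUINE_BASE_MW = 319.87  # g/mol, chloroquine base (C18H26ClN3); assumed value

def micromolar_to_ng_per_ml(conc_um: float, mw: float = CHLOROQUINE_BASE_MW) -> float:
    # 1 uM = 1 umol/L, so the mass concentration is conc * MW in ug/L, i.e. ng/mL
    return conc_um * mw

for label, ec_um in [("EC50", 1.13), ("EC90", 6.9)]:
    print(f"{label}: {ec_um} uM ~= {micromolar_to_ng_per_ml(ec_um):.0f} ng/mL")
# EC50: 1.13 uM ~= 361 ng/mL; EC90: 6.9 uM ~= 2207 ng/mL
```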
The effective concentration needed to inhibit 90% of viral replication (EC90) of 6.9 µM is higher than the therapeutic exposures needed to treat malaria, but should be clinically achievable, and maintained, with daily doses of chloroquine ≥500 mg/day 15.

Chloroquine has been used in patients with COVID-19 in China and South Korea with reported good effect 20. However, rigorous, peer-reviewed outcome data are currently lacking, so it is not possible to draw firm conclusions about its efficacy and safety. A major problem with the non-controlled use of untested treatments during disease emergence is that improvements in outcome that occur naturally over time, due to improved general management of cases as clinical experience is accrued, are falsely attributed to the novel therapy. Despite the lack of data, as of 20th March 2020, chloroquine, at a dose of 500 mg twice daily, is recommended for mild, moderate and severe COVID-19 cases in China, and is currently recommended for all patients >70 years old with evidence of pneumonia due to SARS-CoV-2 in Italy 21.

Safety. Chloroquine has been used extensively as continuous chemoprophylaxis against malaria, for individual periods often exceeding five years, and has been the prophylactic drug of choice in pregnancy 22. It is safe in all age groups. In addition to its antimalarial use, both chloroquine and the closely related, slightly more hydrophilic hydroxychloroquine are used in continuous daily dosing for rheumatoid arthritis, systemic and discoid lupus erythematosus, and psoriatic arthritis. Chloroquine at a dose of 2.4 mg base/kg (155 mg)/day for years is used for rheumatoid arthritis. Chloroquine given at the correct dose has an excellent safety profile.

This protocol has been written according to the SPIRIT guidelines 23. Figure 1 shows the study flowchart. The primary aim of this study is to assess the safety and efficacy of chloroquine for the treatment of hospitalized adults with reverse transcription polymerase chain reaction (RT-PCR) confirmed SARS-CoV-2 infection in Vietnam.

Primary objective. To determine if chloroquine results in more rapid clearance of SARS-CoV-2 from throat/nose swabs of patients with COVID-19.

Secondary objectives.
• To determine if chloroquine shortens the duration of hospital stay.
• To determine if chloroquine results in more ventilator-free days.
• To determine if chloroquine use results in better survival compared with standard of care.
• To define the safety profile of chloroquine in COVID-19.

The study will start with a 10-patient prospective observational pilot study following the same entry and exclusion criteria as the randomized trial and undergoing the same procedures. All 10 participants will receive chloroquine at the doses used in the trial; they will not be randomized. The purpose of the pilot study is to develop the study procedures for the randomized controlled trial, including the safe monitoring of participants, and to acquire preliminary data on the safety of chloroquine in those with COVID-19 in Vietnam. Data from these patients will not be included in the final analysis. Once the pilot study has been completed, and the data reviewed by the trial steering committee (TSC), the data monitoring committee (DMC), and the Ministry of Health (MoH) ethics committee (EC), we will proceed with the trial. We aim for minimum delay between completing the pilot study and starting the randomized trial.
The main study is an open-label, randomised, controlled trial with two parallel arms: standard of care (control arm) versus standard of care plus 10 days of chloroquine (intervention arm), with a loading dose over the first 24 hours followed by 300 mg base orally once daily for nine days. The study will recruit patients at three sites in Ho Chi Minh City, Vietnam: the Hospital for Tropical Diseases (HTD), the Cu Chi Field Hospital, and the Can Gio COVID hospital. Additional sites, including the National Hospital for Tropical Diseases in Ha Noi, Vietnam, will be added based on enrolment rates. All adult patients (≥18 years old) presenting to the study centres with positive throat/nose swabs (RT-PCR) for SARS-CoV-2 and requiring hospital admission will be eligible for study inclusion, subject to the inclusion and exclusion criteria. Randomization will be stratified by severity of illness, with severe disease defined by SpO2 ≤94% or tachypnoea (respiratory rate ≥24 breaths/min), and mild-moderate disease defined by SpO2 >94% and respiratory rate <24 breaths/min without supplemental oxygen.

Primary endpoint. The primary endpoint is the time to viral clearance from throat/nose swab. Viral presence will be determined using RT-PCR to detect SARS-CoV-2 RNA. Throat/nose swabs for viral RNA will be taken daily while in hospital until there have been at least two consecutive negative results. Virus will be defined as cleared when the patient has had ≥2 consecutive negative PCR tests. The time to viral clearance will be defined as the time following randomization until the midpoint between the last positive and the first of the negative throat/nose swabs (a worked sketch of this calculation follows the eligibility criteria below).

Secondary endpoints.
• The duration of hospital stay from the time of randomization.
• The number of ventilator-free days during the first 28 days following randomization.
• The time to (all-cause) death following the first 10, 28, and 56 days since randomization.
• The WHO ordinal outcome scale for COVID-19 at days 28, 42 and 56 24.
• Fever clearance time (defined as temperature <37.5°C for 48 hours).
• Development of acute respiratory distress syndrome (ARDS) defined by the Kigali criteria 25.
• Oxygenation-free days over the first 28 days.
• Risk of serious adverse events (SAEs).
• Risk of grade 3 or 4 adverse events (AEs).

Inclusion criteria.
• Laboratory-confirmed SARS-CoV-2 infection as determined by RT-PCR, or other commercial or public health assay, in any specimen <48 hours prior to randomization, and requiring hospital admission in the opinion of the attending physician.
• Provides informed consent prior to initiation of any study procedures (or consent provided by an authorized representative).
• Understands and agrees to comply with planned study procedures.
• Agrees to the collection of oropharyngeal (OP) swabs and venous blood per protocol.
• Male or female adult ≥18 years of age at time of enrolment.

Exclusion criteria.
• Intractable seizures or history of uncontrolled epilepsy.
• History of cardiac arrhythmia requiring on-going anti-arrhythmic therapy.
• Alanine aminotransferase (ALT) over five times the upper limit of normal.
• Stage 4 severe chronic kidney disease or requiring dialysis (i.e. eGFR <30).
• Anticipated transfer to another hospital that is not a study site within 72 hours.
• Allergy to any study medication.
• Chloroquine treatment mandated for any other reason, e.g. vivax malaria.
• Taking a concomitant medication as per Table 1 which cannot be safely stopped or managed.
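To make the primary endpoint definition concrete, the following minimal sketch (a hypothetical helper, not protocol code) computes the time to viral clearance from a daily series of swab results, taking the midpoint between the last positive swab and the first of at least two consecutive negatives:

```python
from typing import List, Optional

def time_to_viral_clearance(days: List[float], positive: List[bool]) -> Optional[float]:
    """`days` are days since randomization of each swab; `positive` flags PCR results.
    Virus is 'cleared' at the first of >=2 consecutive negative swabs; the clearance
    time is the midpoint between the last positive swab and that first negative.
    Returns None if two consecutive negatives are never observed."""
    last_positive = 0.0  # assume randomization as reference if no in-study positive swab
    for i, (day, pos) in enumerate(zip(days, positive)):
        if pos:
            last_positive = day
        elif i + 1 < len(positive) and not positive[i + 1]:
            return (last_positive + day) / 2.0
    return None

# Example: positive on days 1-3, negative on days 4 and 5 -> clearance at day 3.5
print(time_to_viral_clearance([1, 2, 3, 4, 5], [True, True, True, False, False]))
```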
Informed consent.
Informed consent to enter into the trial and be randomized must be obtained from all participants (or from a person with responsibility, e.g. a family member/relative as defined by the Vietnam MoH guidelines, if the participant lacks capacity), in their own language, before enrolment by the site principal investigator (PI) or an appropriately trained clinician 26. This should follow an explanation of the aims, methods, benefits and potential hazards of the trial, and precede any trial-specific procedures or any blood being taken for the trial, including for the screening assessment. It must be made completely and unambiguously clear that the participant (or their relative) is free to refuse to participate in, or withdraw from, all or any aspect of the trial, at any time and for any reason, without incurring any penalty or affecting their subsequent treatment. This will be stated explicitly in the participant information sheet (see Extended data 27). If consent was provided by a relative, the participant should be consulted and their consent recorded if and when they regain the capacity to do so. Signed consent forms must be kept in the investigator site file and, where possible from an infection control standpoint, a copy given to the participant or family.

Due to the biohazard of SARS-CoV-2-contaminated documents, special safety provisions must be made for how source documents are collected and stored. At the study sites, participants may be treated in an isolation area or a negative-pressure room, from which paper documents, as presumed contaminated items, may not be taken out. In such cases, the study will apply other appropriate methods for obtaining valid informed consent, as detailed in the Oxford University Clinical Research Unit (OUCRU) informed consent standard operating procedures (SOP). The clinician will document the signing of the consent form in the participant's medical notes. It may be necessary for a photograph or scanned image of the informed consent signature page to be stored as an "electronic source document" rather than retaining a paper version, which will be destroyed to minimize infection transmission risks. A handheld device (e.g. smartphone) will remain in the high-risk zone for this purpose. This will allow the information to be transmitted electronically as a PDF for archiving, following the OUCRU archiving and destruction of essential documents SOP. The person who gives consent will retain a copy of this electronic source document. Where possible, a consent form will be re-signed at discharge to ensure a hard copy can be handed to the participant and kept for storage. Copies of the informed consent form in English and Vietnamese are provided as Extended data 27.

Screening and eligibility assessment. After consent has been obtained from the participant or their relative, clinical information, including medical history, examination findings, and weight, will be recorded on the case report form (CRF). Routine tests will also be recorded on the CRF as part of the medical history of the current infection. The screening procedures will take place as soon as possible after the clinicians have identified a potential participant in the study hospitals. Recruitment activities will occur only in an in-patient hospital setting; no activities will be carried out outside the participating hospitals. The target sample size of 240 participants is anticipated to be enrolled over 6-8 months.

Randomization and treatment allocation.
Randomization will be 1:1 to either chloroquine or standard of care treatment. Randomization with variable block sizes of four and six will be used to assign subjects to treatment, stratified by recruiting centre and disease severity (illustrated in the sketch at the end of this subsection). The randomization list will be generated according to the OUCRU randomization and drug dispensing SOP. In brief, the study statistician will set up statistical code to generate the randomization list and transfer it to the central study pharmacist. The study pharmacist will change the random seed (i.e. the initialization of the random number generator) in the statistical code, in order to blind the study statistician, and then run the code to prepare the final randomization list for treatment preparation. The randomization list will be password protected and stored on a secure server to which only the study pharmacist has access. Based on the randomization list, the study pharmacist will prepare randomization envelopes, generate identical sealed treatment packs for each study ID, and distribute them to the sites in batches as required. Each pack will contain sufficient chloroquine for the 10 days of treatment. Enrolment logs specific to each study site will be used to assign participants to the next available sequential number and corresponding sealed treatment pack.

Chloroquine will be administered orally, as tablets. For unconscious participants chloroquine can be crushed and administered as a suspension via a nasogastric tube. A loading dose, administered with food where possible, is given on the first study day. Following the first 24 hours, participants will receive a dose of chloroquine phosphate salt of 500 mg once daily until 10 days after randomization (unless they are <53 kg, when the dose will be reduced; see Table 2). Chloroquine has complex pharmacokinetic properties, with an enormous apparent volume of distribution (200-300 L/kg) and a terminal elimination half-life of 1-2 months, so concentrations in plasma (and rapidly exchanging tissue compartments) are determined predominantly by distribution, not elimination. Given that EC50 values against the SARS-CoV-2 virus are in the low micromolar range in vitro, suggesting moderate activity, it is likely that relatively high concentrations will be required for maximum effect in vivo. The intervention dose was chosen following evidence review and discussion with healthcare partners. We aimed for an initial loading dose of 10 mg/kg base, followed by 5 mg/kg base at 6 hours and then every 24 hours until 10 days after randomization. The (maximum) dosage used for an adult ≥53 kg is shown in Table 2.

Patients with severe renal impairment or elevated blood transaminases are excluded from the study (see inclusion and exclusion criteria). If an enrolled patient develops cardiac arrhythmia or syncope, chloroquine will be stopped and an electrocardiogram (ECG) performed. Electrolytes (Na, K, Ca, Mg) will be checked and corrected as necessary. Where significant asymptomatic QTc prolongation is identified (>500 ms), chloroquine administration will be interrupted, electrolytes checked and corrected, and the drug reintroduced when the QTc is <480 ms. Particular care will be taken in those who take other medications which may prolong the QTc.
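A minimal sketch of how such a stratified randomization list might be generated; names, seed handling and list format are illustrative only, since the real list is produced per the OUCRU SOP:

```python
import random

ARMS = ["chloroquine", "standard_of_care"]           # 1:1 allocation
STRATA = [(site, sev) for site in ("HTD", "CuChi", "CanGio")
          for sev in ("severe", "mild_moderate")]    # centre x severity strata

def stratum_allocations(n_slots: int, rng: random.Random) -> list:
    """Permuted blocks of randomly chosen size 4 or 6, each balanced 1:1."""
    out = []
    while len(out) < n_slots:
        block = ARMS * (rng.choice([4, 6]) // 2)  # 2 or 3 of each arm per block
        rng.shuffle(block)
        out.extend(block)
    return out[:n_slots]

rng = random.Random(2020)  # in practice the pharmacist re-seeds to blind the statistician
rand_list = {stratum: stratum_allocations(40, rng) for stratum in STRATA}
print(rand_list[("HTD", "severe")][:6])
```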
Clinical assessment: All participants will have a full clinical assessment, including medical history and examination, by the study team. Data collected will include presenting symptoms, duration of illness, past medical history, current medications, and physical examination findings, including vital signs (pulse, temperature, blood pressure, oxygen saturation and FiO2) and the results of cardiovascular, respiratory, gastrointestinal and neurological examination, in line with standard clinical practice.

Radiology: The results of any radiological imaging (chest X-ray, computed tomography (CT) scan, lung ultrasound) performed during the participant's illness will be recorded in the case report form. Participants who have not had a chest X-ray will undergo one on study entry.

Biological specimens and laboratory evaluations: On study entry all participants will have a review of the clinical investigations done so far. Where an investigation has been performed within the last 24 hours, the results will be recorded and it will be repeated only if clinically indicated. Study-entry laboratory tests will be performed as per the study schedule below.

Subsequent assessments. Participants will have daily assessment by hospital staff as per standard of care while in-patient. While in-patient, the study will collect the following data: peripheral oxygen saturation (pulse oximeter), respiratory rate, and FiO2. The use of a ventilator or other non-invasive ventilation device will be recorded each day. Participants will have clinical assessments recorded as per the trial assessment schedule (see Table 3). The decision to discharge patients will be at the discretion of the attending physician and will depend upon the clinical status of the patient. According to current standard of care, recovery and hospital discharge depend upon the patient having had at least two consecutive daily negative PCR throat/nose swabs. Following discharge, participants will be seen on days 28, 42 and 56 post-randomization.

In a subset of participants admitted to HTD we will monitor ECG changes using real-time monitoring. Participants will have up to one hour of continuous ECG recording daily. The ECG recording will be downloaded from a standard monitor (GE Careview) and stored electronically. ECG changes (including the QT interval) will then be analysed by machine learning.

Each participant has the right to withdraw from the trial at any time. In addition, the investigator may discontinue a participant from the trial at any time if the investigator considers it necessary for any reason, including:
• Ineligibility (either arising during the trial or retrospectively, having been overlooked at screening)
• Significant protocol deviation
• Significant non-compliance with treatment regimen or trial requirements
• An AE which requires discontinuation of the trial medication or results in inability to continue to comply with trial procedures
• Disease progression which requires discontinuation of the trial medication or results in inability to continue to comply with trial procedures
• Withdrawal of consent
• Loss to follow-up

If a participant chooses to discontinue their trial treatment (chloroquine), they should always be followed up (providing they are willing) and encouraged not to leave the whole trial. If they do not wish to remain in trial follow-up, however, their decision must be respected and the participant will be withdrawn from the trial. The reason for withdrawal should be ascertained wherever possible.
Prior to withdrawing from the trial, the participant will be asked to have the assessments performed as appropriate for the final visit, although they are at liberty to refuse any or all components of the assessment. If a participant withdraws from the trial, the medical data collected during their previous consented participation will be kept and used in the analysis. Consent for future use of stored samples already collected can be refused when leaving the trial early (but this should be discouraged and should follow a discussion). If consent for future use of stored samples already collected is refused, then all such samples will be destroyed following the policies of the institution where the samples reside at the time (local or central storage). Participants may change their minds about stopping trial follow-up at any time and re-consent to participation in the trial. Participants who stop trial follow-up early will not be replaced, as the total sample size includes adjustment for losses to follow-up.

The safety profile of chloroquine is well understood and the risks related to chloroquine phosphate/sulphate/hydrochloride are very low, unless the drug is taken in overdose 28. Most side effects are infrequent. Adverse reactions (ARs) relating to the cardiovascular system, the central nervous system, the skin, hypoglycaemia, hypersensitivity, the gastrointestinal tract, and retinal toxicity have all been described, though usually after high doses and protracted exposures. The main adverse effect is itching in dark-skinned individuals; Africans are much more commonly affected than Asians. Adverse effects will be classified and graded according to the Common Terminology Criteria for Adverse Events (CTCAE) system 29. All serious and grade 3 or 4 AEs will be compared between arms and reported by frequency per arm. An independent DMC will oversee the safety of the trial participants.

Definitions. The definitions of the principles of International Conference on Harmonization (ICH) Good Clinical Practice (GCP) apply to this trial protocol (see Table 4).

AEs include:
• An exacerbation of a pre-existing illness
• An increase in frequency or intensity of a pre-existing episodic event or condition
• Continuous persistent disease or a symptom present at baseline that worsens following administration of the study intervention

ARs include any untoward or unintended response to drugs. Reactions to the trial treatment (chloroquine) or comparator should be reported appropriately.

In the context of this trial, AEs do not include:
• Medical or surgical procedures; the condition that leads to the procedure is the AE
• Pre-existing disease or a condition present before treatment that does not worsen
• Hospitalisations where no untoward or unintended response has occurred, e.g. elective cosmetic surgery, social admissions

Death should always be reported as an SAE, regardless of cause. All grade 3 or 4 AEs, and all SAEs and SARs, whether expected or not, should be recorded in the CRF. Non-serious grade 1 or 2 AEs need not be recorded unless they are thought to be related to the trial treatment or they result in a change or interruption in treatment. A laboratory abnormality must be recorded as a clinical AE only if it is associated with an intervention.
Intervention includes, but is not limited to, discontinuation of a current treatment, dose reduction/delay of a current treatment, or initiation of a specific treatment. In addition, any medically important laboratory abnormality may be reported as an AE at the discretion of the investigator. This would include a laboratory result for which no intervention is needed but whose abnormal value suggests a disease or organ toxicity. Laboratory events will be graded according to CTCAE definitions.

When an AE or AR occurs, the investigator responsible for the care of the participant must first assess whether or not the event is serious, using the definitions given in Table 4. If the event is serious and not solely related to COVID-19, or is fatal, then an SAE Form must be completed and the OUCRU clinical trials unit (CTU) notified within 24 hours.

Adverse event (AE): Any untoward medical occurrence in a participant or clinical trial subject to whom an investigational medicinal product has been administered, including occurrences that are not necessarily caused by or related to that product.

Adverse reaction (AR): Any untoward and unintended response to an investigational medicinal product related to any dose administered.

Unexpected adverse reaction (UAR): An AR, the nature or severity of which is not consistent with the information about the investigational medicinal product in question set out in the Summary of Product Characteristics (SPC) for that product.

Serious adverse event (SAE), serious adverse reaction (SAR) or suspected unexpected serious adverse reaction (SUSAR): Respectively, any AE, AR or UAR that:
• Results in death
• Is life-threatening*
• Requires hospitalisation or prolongation of existing hospitalisation**
• Results in persistent or significant disability or incapacity
• Consists of a congenital anomaly or birth defect
• Is another important medical condition***

* The term life-threatening in the definition of a serious event refers to an event in which the participant is at risk of death at the time of the event; it does not refer to an event that hypothetically might cause death if it were more severe, for example, a silent myocardial infarction.
** Hospitalisation is defined as an in-patient admission, regardless of length of stay, even if the hospitalisation is a precautionary measure for continued observation. Hospitalisations for a pre-existing condition (including elective procedures that have not worsened) do not constitute an SAE.
*** Medical judgement should be exercised in deciding whether an AE or AR is serious in other situations. The following should also be considered serious: important AEs or ARs that are not immediately life-threatening or do not result in death or hospitalisation, but that may jeopardise the subject or may require intervention to prevent one of the other outcomes listed in the definition above.

The severity of all AEs and/or ARs (serious and non-serious) in this trial should be graded using the CTCAE toxicity gradings 29. The investigator must assess the causality of all serious events or reactions in relation to the trial therapy (chloroquine) using the definitions in Table 5. There are five categories: unrelated, unlikely, possible, probable, and definitely related. If the causality assessment is unrelated or unlikely to be related, the event is classified as an SAE. If the causality is assessed as possible, probable or definitely related, then the event is classified as an SAR.
If an AE or AR is not expected with COVID-19 disease or with chloroquine, then it is unexpected. An unexpected adverse reaction (UAR) is one not previously reported in the current Summary of Product Characteristics (SPC) at the time the event occurred, or one that is more frequent or more severe than previously reported. The definition of a UAR is given in Table 4. However, treatment in this study will be directly observed as an in-patient, and therefore this is extremely unlikely.

Regulatory reporting. All SAEs will be reported as soon as possible to the MoH ethics committee. An initial written report of an SAE that results in death or is life-threatening must be submitted within seven working days of the study team becoming aware of the SAE. Other SAEs must be reported within 15 working days of the study team becoming aware of them. Additional medical information on the SAE's course must be provided in follow-up reports until the trial participant recovers or stabilizes with no further changes expected. The format and content of the initial report should follow the Vietnam MoH EC report template and include all information available at the time of reporting. All SAEs will be reported to OxTREC in the annual review form, and to the DMC in accordance with the DMC charter.

An independent DMC will oversee the safety of the trial. A DMC charter will describe the membership of the DMC, relationships with other committees, terms of reference, decision-making processes, and the approximate timing and frequency of interim analyses (see Extended data 27). At the interim analyses, the DMC will receive a report including summaries of mortality, SAEs, grade 3 and 4 AEs, and the time to viral clearance (defined as the time following randomization until the midpoint between the last positive and the first of the negative throat/nose swabs) by treatment arm. The report will be prepared by the study statistician and distributed to all DMC members for review. Based on these data, the committee will make recommendations on the continuation, cessation or amendment of the study.

The DMC will perform a first safety analysis after the first 10 participants from the pilot study have completed the allocated two-week treatment period or died. Second and third interim analyses, assessing safety, efficacy and futility, will be performed after 60 and 120 participants have completed the allocated two-week treatment period or died. Stopping for harm of chloroquine will be considered if a safety issue emerges which is sufficiently large, in the judgement of the DMC, to suggest that continued exposure of participants to the drug is unethical. Early stopping for efficacy is not foreseen, as this is a study with a virological rather than a survival endpoint. However, if chloroquine truly appears to have an extraordinary beneficial effect, the DMC will be able to recommend this to the TSC. The DMC will be able to mandate additional safety analyses at any time point they deem fit.

As the dissemination of preliminary summary data could influence the further conduct of the trial and introduce bias, access to interim data and results will be confidential and strictly limited to the statistician involved and the DMC; results (except for the recommendation) will not be communicated to outside parties or to the clinical investigators involved in the trial. Further reviews will be at the discretion of the DMC. All DMC reports, replies and decisions will be sent to the responsible research ethics committees.
A protocol deviation is any non-compliance with the clinical trial protocol or GCP requirements. If such a deviation has an impact on patient safety or scientific integrity, it becomes a protocol violation. The non-compliance may be on the part of the participant, the investigator, or the study site staff. Whenever violations occur, corrective actions are to be developed by the site and implemented promptly. It is the responsibility of the site investigators to use continuous vigilance to identify and report protocol deviations and violations. All deviations and violations must be documented in source documents and reported to the OUCRU CTU within two days of being identified. In addition, protocol violations must be reported to the relevant ethics committees.

Sample size calculation. We assume the viral clearance time from throat/nose swabs to have a log-normal distribution. Using data from 14 patients in Ho Chi Minh City and Singapore, we estimated a mean time to clearance (natural log scale) of 2.17 days with standard deviation 0.74 (Dr. Hsu Li Yang, personal communication, 13 February 2020, and Dr. Nguyen Van Vinh Chau, personal communication, 12 February 2020). 120 patients will give 80% power to detect a reduction in the time to viral clearance by at least a factor of 0.68. If the trial completes enrolment and is not stopped early due to safety, efficacy or futility, we will enrol 240 patients. A sample size of 240 patients will give 80% power to detect a reduction in the time to viral clearance by at least a factor of 0.76. (These figures are reproduced approximately in the sketch at the end of this subsection.)

Statistical and analytical plans. Study analysis will follow an a priori defined statistical analysis plan, which will be completed before database locking. The primary endpoint is virological and robust: the assays will be performed by technicians unaware of the treatment allocation of the patient, and in that sense the study is blinded. The time to viral clearance will be defined as the time from randomization to the midpoint between the last positive and the first of at least two consecutive daily negative viral PCR tests on throat/nose swabs. Data will be illustrated with time-to-event curves and analysed using the log-rank test and Cox model.

Survival until 56 days after randomization. Overall survival will be visualized using Kaplan-Meier curves and modelled using the Cox proportional hazards regression model with stratification by disease severity. In addition, survival will be modelled with a multivariable Cox regression model including the following covariates in addition to treatment group: age and comorbid conditions (hypertension, cardiac disease, diabetes, ACE inhibitor or angiotensin receptor blocker use).

The frequencies of serious and grade 3 and 4 ARs, as well as the frequencies of specific AEs, will be summarized (both the total number of events and the number of participants with at least one event). The proportion of participants with at least one such event (overall and for each specific event separately) will be summarized and (informally) compared between the two treatment groups, based on Fisher's exact test if the expected number in one of the cells is at most one, and the chi-square test if the expected number in each cell is larger than one.
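The quoted power figures can be reproduced, approximately, with a standard two-sample normal-approximation calculation on log clearance times under the stated log-normal assumption. A sketch, not the protocol's own sample-size code:

```python
from math import log, sqrt
from scipy.stats import norm

SD_LOG = 0.74   # SD of log clearance time (days), from the 14-patient series
ALPHA = 0.05    # two-sided significance level

def approx_power(n_per_arm: int, ratio: float, sd_log: float = SD_LOG) -> float:
    """Approximate power of a two-sided two-sample z-test on log times to detect
    a multiplicative reduction `ratio` in the median time to viral clearance."""
    delta = abs(log(ratio)) / sd_log                       # standardized effect size
    return norm.cdf(delta * sqrt(n_per_arm / 2) - norm.ppf(1 - ALPHA / 2))

print(f"120 patients, ratio 0.68: power ~ {approx_power(60, 0.68):.2f}")   # ~0.81
print(f"240 patients, ratio 0.76: power ~ {approx_power(120, 0.76):.2f}")  # ~0.82
```

Both come out near the protocol's quoted 80%; the small differences reflect the normal approximation rather than an exact t-test or simulation.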
Analysis of other secondary outcomes:
1. The duration of hospital stay from the time of randomization.
2. The number of ventilator-free days during the first 28 days following randomization.
3. The time to (all-cause) death following the first 10, 28, and 56 days since randomization.
4. The WHO ordinal outcome scale for COVID-19 at days 28, 42 and 56 24.
5. Fever clearance time.
6. Development of ARDS defined by the Kigali criteria 25.
7. Oxygenation-free days over the first 28 days.

The analyses of these secondary endpoints will be defined in the statistical analysis plan. The primary null hypothesis is that the rate of clearance of virus from throat/nose swabs does not differ between chloroquine and standard of care therapy. The primary analysis population for all analyses is the full analysis population, containing all randomized participants except those mistakenly randomized without COVID-19. Participants will be analysed according to their randomized arm (intention-to-treat). In addition, the primary endpoint will be analysed in the per-protocol population, which will exclude participants with major protocol violations and those receiving less than one week of chloroquine administration for reasons other than death.

Data collection and entry. Source documents are where data are first recorded, and from which participants' CRF data are obtained. These include, but are not limited to, hospital records (from which medical history and previous and concurrent medication may be summarised into the CRF), clinical and office charts, laboratory and pharmacy records, radiographs, and correspondence. CRF entries will be considered source data if the CRF is the site of the original recording (i.e. there is no other written or electronic record of the data). Data collection is the responsibility of the clinical trial staff at the site, under the supervision of the site PI. The investigator is responsible for ensuring the accuracy, completeness, legibility, and timeliness of the data reported. All trial data will be recorded on paper CRFs and entered into CliRes, a 21 CFR Part 11-compliant data capture system hosted by the OUCRU IT department. Participants will be identified by a unique trial-specific number and/or code in any database; their name and any other identifying details will not be included in any electronic trial data file. The data system includes password protection and internal quality checks, such as automatic range checks, to identify data that appear inconsistent, incomplete, or inaccurate.

Record retention. CRFs, clinical notes and administrative documentation will be kept in a secure location and held for 15 years after the end of the trial. Clinical information will not be released without written permission, except as necessary for monitoring, auditing and inspection purposes. During this period, all data should be accessible to the competent authorities with suitable notice. Electronic data will be kept for at least 20 years at the OUCRU CTU.

Publication and data sharing policy. The Trial Management Group (TMG) will maintain a list of all investigators and OUCRU staff who are part of the OUCRU COVID research group. This research group will be cited in the authorship list, and the full list presented in the acknowledgements section at the end of the paper. This list will include investigators who contributed to the investigation being reported but who are not members of the writing committee. In principle, sub-study reports should include all investigators for the main study, although in some instances where a smaller number of investigators have made any form of contribution, it may be appropriate to abbreviate the listing.
All headline authors in any publication arising from the main study or sub-studies must have made a substantive academic or project management contribution to the work being presented. "Substantive" must be defined by a written declaration of exactly what the contribution of any individual is believed to have been. In addition to fulfilling the criteria based on contribution, additional features that will be considered in selecting an authorship group include the recruitment of participants who contributed data to any set of analyses contained in the manuscript, the conduct of analyses (laboratory and statistical), and leadership and coordination of the project in the absence of a clear academic contribution.

In line with Wellcome Trust policy that the results of publicly funded research should be freely available, manuscripts arising from the trial will, wherever possible, be submitted to peer-reviewed journals that enable open access via UK PubMed Central (PMC) within six months of the official date of final publication. All publications will acknowledge the trial's funding sources. In line with research transparency and greater access to data from trials, OUCRU's clinical trials are registered at ClinicalTrials.gov (NCT04328493, 31/03/2020) and a data sharing policy is in place. This policy is based on a controlled-access approach, with a restriction on data release that would compromise an ongoing trial or study. Data exchange complies with Information Governance and Data Security Policies in all of the relevant countries.

Quality assurance and quality control

Risk assessment. The quality assurance (QA) and quality control (QC) considerations are based on a formal risk assessment, which acknowledges the risks associated with the conduct of the trial and how to address them with QA and QC processes. QA includes all the planned and systematic actions established to ensure the trial is performed, and the data generated, documented and/or recorded and reported, in compliance with the principles of ICH GCP and applicable regulatory requirements. QC includes the operational techniques and activities carried out within the QA system to verify that the requirements for quality of the trial-related activities are fulfilled.

Central monitoring at OUCRU CTU. Data from each site collected on the paper CRFs will be double entered and stored in a central database at OUCRU. This database will be checked at the OUCRU CTU for missing or unusual values (range checks) and for consistency within participants over time. If any such problems are identified, the site will be contacted and asked to verify or correct the data. The OUCRU CTU will also send reminders for any overdue and/or missing data with the regular inconsistency reports of errors. Other essential trial issues, events and outputs will be detailed in the Data Management, Monitoring and Quality Management Plans, which are based on the trial-specific risk assessment.

On-site monitoring. A site initiation visit will be conducted for each study site by staff from the OUCRU CTU. All essential site staff, including the PI, lead pharmacist and lead research nurse, must be in attendance. The initiation training will include training in the administration of the trial treatment as well as the trial procedures. Monitoring will then be carried out approximately annually at each site by OUCRU CTU staff. On-site monitoring will also be conducted regularly by the site monitors.
The frequency, type and intensity of routine monitoring, and the requirements for triggered monitoring, will be detailed in the Monitoring Plan, which will also detail the procedures for review and sign-off. The monitoring will adhere to the principles of ICH GCP and the Monitoring Plan.

Compliance. The trial (including all sites) will comply with the principles of the Declaration of Helsinki (2008) and will be conducted in compliance with the approved protocol and the principles of GCP. An agreement will be in place between each site and the OUCRU CTU, setting out respective roles and responsibilities. The site will inform the CTU as soon as it becomes aware of a possible serious breach of compliance. For the purposes of this regulation, a 'serious breach' is one that is likely to affect to a significant degree:
• The safety or physical or mental integrity of the subjects in the trial, or
• The scientific value of the trial.

Regulatory approval has been given by the Drug Administration of Vietnam (DAV). Any further amendments will be submitted to, and approved by, the relevant ethics committee.

Ethical conduct of the study. All participants will receive the best available treatment for COVID-19, following local and national guidelines. They will benefit from frequent and careful follow-up of their condition throughout the treatment of their disease and for up to 56 days from randomization. The risks and benefits of participation will be communicated in two ways. First, all potential participants or their family members will be given a participant information sheet clearly listing the risks and benefits of the trial (see Extended data 27). Second, all potential participants (or their families) will be able to discuss participation with their consulting doctor, who will be able to address questions not covered by, or arising from, the participant information sheet.

The trial protocol will seek ethical approval from the Oxford Tropical Research Ethics Committee and the Vietnam Ministry of Health to include incapacitated, comatose adults in the trial, as we consider that many of these adults will have the most severe disease and therefore represent the group that stands to gain most from chloroquine. It is unknown whether participants who receive the study treatment will benefit. All participants will receive the best available standard of care in Vietnam. The study will use a drug that has been studied thoroughly and whose toxicities are well described. Chloroquine has been given to very large numbers of people worldwide, in clinical trial settings and in clinical practice. The trial will be recruiting sick participants, but site investigators have considerable experience with this population. This will minimise the risks to the participants and the trial. A detailed risk assessment will be conducted prior to starting the trial. COVID-19 is an infectious disease and there is a risk of transmission to health care workers and study personnel who visit clinical areas. Personal protective equipment will be used as per Vietnamese guidelines and availability.

Confidentiality. The investigator must ensure that participants' anonymity is maintained and that their identities are protected from unauthorised parties. Participants will be assigned a trial identification number, which will be used on CRFs; participants will not be identified by name. The investigator will keep a participant trial register showing identification numbers, surnames and dates of birth.
This register will be kept securely on a password-protected, encrypted computer, in a dedicated password-protected folder with limited, regulated access. The unique trial number will identify all laboratory specimens, case record forms, and other records; no names will be used, in order to maintain confidentiality.

Expenses. Treatment and hospital costs from enrolment to discharge from hospital for all actively enrolled participants will be covered by the State Budget. The study funding will cover study-specific screening tests and study procedures up to day 56 from enrolment, including travel expenses for the participants to attend follow-up visits. The study will not cover the cost of treating pre-existing diseases or those unrelated to study participation or to the diagnosis and/or treatment of COVID-19.

The trial is funded by the Oxford University Clinical Research Unit and the Ministry of Health, Vietnam. The conduct of this study is sponsored by the University of Oxford. The University has a specialist insurance policy in place (Newline Underwriting Management Ltd, at Lloyd's of London) which would operate in the event of any participant suffering harm as a result of their involvement in the research.

The independence of this study from any actual or perceived influence, such as by the pharmaceutical industry, is critical. Therefore, any actual conflict of interest of persons who have a role in the design, conduct, analysis, publication, or any aspect of this trial will be disclosed and managed. Furthermore, persons who have a perceived conflict of interest will be required to have such conflicts managed in a way that is appropriate to their participation in the trial. The study leadership has established policies and procedures for all study group members to disclose all conflicts of interest.

This trial started recruitment on 06 April 2020 and has enrolled two participants so far. The team wrote this protocol on 22 March 2020 and received ethical approval from the Vietnamese MoH on 24 March. At that time, the SARS-CoV-2 pandemic had caused disease in over 170 countries, with more than 275,000 cases confirmed and more than 11,000 deaths 7. As of 5 May 2020, there are 3,660,055 COVID-19 cases reported worldwide, in 210 countries, with 252,675 reported deaths 7. There are more than 1,528 COVID-19 studies registered on trial registration sites, of which 486 are randomised controlled trials, but as yet no effective treatment has been clearly defined; rather, numerous uncontrolled studies have been reported, and some retracted 30, 31. Despite this lack of evidence, untested treatments have made it into national guidelines (notably chloroquine) in Belgium, Italy, China and elsewhere 21, 32, 33. While this is understandable, particularly where drugs are perceived to have well-understood safety profiles, it is not without risk 28. Deaths have already been reported from self-medication with hydroxychloroquine, and the early adoption of unproven treatments into guidelines can lead to significant difficulties in obtaining rigorous evidence, because of subsequent reluctance by ethics committees to sanction placebo-controlled trials 34. This can mean that good quality data may never emerge. Recent experience with Ebola demonstrates the importance of adequately sized randomized trials with appropriate control arms 35. To date Vietnam has a total of 271 SARS-CoV-2 cases, with 219 people having recovered and no deaths reported 7, 36.
Vietnam's government has been praised for its pro-active and decisive approach, with the rapid establishment of a National Steering Committee for COVID-19 Prevention and Control and subsequent implementation of a national response plan including meticulous contact tracing, extensive testing, and free health care for SARS-CoV-2 treatment 37. The Vietnamese MoH has spared no effort in controlling the spread of SARS-CoV-2 in Vietnam so far, and has also been advocating for and supporting research to properly test interventions in order to best serve the population 38. The current protocol assesses the safety and efficacy of chloroquine for the treatment of hospitalized adults with laboratory-confirmed SARS-CoV-2 infection in Vietnam. Even though the time between the development of the protocol and the first patient enrolled was merely three weeks, given the pace of new and ongoing chloroquine trials globally, the study team has considered stopping this trial early on grounds of futility (based on information from the Milken Institute and ClinicalTrials.gov). The setup of this first trial within hospitals and quarantine centres has been a tremendous endeavour, and the study team now has the infrastructure and trained clinical staff in place to conduct clinical research in this challenging context. If emerging data reveal that chloroquine is either ineffective or dangerous before this trial is completed, the trial will still have been successful in building a network of centres and collaborators ready to trial other COVID-19 interventions.

Underlying data
No underlying data are associated with this article.
COVID-19 is a public health emergency. The COVID-19 patient is the main source of infection, and asymptomatic infected patients can also be a source of infection (Bhadelia 2020). The main routes of transmission are respiratory droplets and contact (Burki 2020). It is not clear whether 2019-nCoV can be transmitted through the mucous membrane of the eye. The S-protein of SARS-like coronaviruses can interact with the human ACE2 protein and infect human respiratory epithelial cells (Ge et al. 2013). The human cornea and conjunctiva express the ACE2 receptor, which could theoretically bind 2019-nCoV and cause infection. Our hospital is in the epidemic area. In our hospital, 37 patients with 2019-nCoV pneumonia had conjunctival sac specimens tested for viral nucleic acid by real-time RT-PCR. According to the Chinese COVID-19 diagnosis and treatment guideline (5th edition) (Huang et al. 2020), 12 cases were severe and the others were mild. Three cases had conjunctival congestion and other signs of inflammation. In one severe case, the conjunctival sac secretion tested positive for nucleic acid by real-time RT-PCR, yet this patient had no conjunctivitis. The other 36 patients were negative on nucleic acid testing of conjunctival secretions. We must therefore consider the possibility of virus in the conjunctival sac secretions of COVID-19 patients, and further testing of conjunctival sac secretions for the virus is needed as evidence. The viral load in the conjunctival sac secretions of COVID-19 patients is relatively low, and we estimate that the viral load is directly proportional to the severity of the disease. Whether 2019-nCoV can be transmitted through the conjunctiva requires further study.
According to currently available genome sequencing data, SARS-CoV-2 is a novel zoonotic, enveloped, positive-sense single-stranded RNA virus from the Coronaviridae family, a family first identified in the 1960s [12]. At the whole-genome level, SARS-CoV-2 shares 96% identity with a bat coronavirus (BatCoV-RaTG13) [13], 91.02% with pangolin-CoV [14], and 79.5% with severe acute respiratory syndrome coronavirus (SARS-CoV) [15]. The original severe acute respiratory syndrome coronavirus (SARS-CoV) [16], Middle East respiratory syndrome coronavirus (MERS-CoV) [17], and SARS-CoV-2 belong to the betacoronavirus genus, which infects mammals and humans. SARS-CoV emerged in China in November 2002; the SARS epidemic ended abruptly in July 2003, with no human SARS cases detected since 2004 [16]. Although both SARS-CoV and SARS-CoV-2 originated from, and are closely related to, bat coronaviruses, whether SARS-CoV-2 has an intermediate host remains unknown. Additional sequencing data from other wild animals and mammals are required to confirm the source and origin of SARS-CoV-2.

According to cryogenic transmission electron microscopy (cryo-TEM) images, the SARS-CoV-2 virion is crown-shaped, with a diameter of ~50-200 nm [18], and has four structural proteins: spike (S), envelope (E), membrane (M), and nucleocapsid (N) (illustrated in Figure 1). The S, E, and M proteins are responsible for viral envelope generation, and the N protein carries the RNA genome (~30 kb). Of note, the spike protein is the glycoprotein that facilitates SARS-CoV-2 attachment, fusion, entry, and transmission into host cells by binding with human angiotensin-converting enzyme 2 (hACE2) receptors [19], which are expressed by epithelial cells of the lung, intestine, kidney, blood vessels, and oral mucosa [20]. The detailed mechanism of how the SARS-CoV-2 S protein binds with hACE2, ultimately leading to pathological organ damage, remains unknown and requires further investigation.

While the modes of SARS-CoV-2 spread are still being investigated, human-to-human airborne transmission of the virus has been confirmed during breathing, coughing, sneezing, and conversing in close contact (1-3 metres). Airborne transmission appears to be a primary mode of spread of COVID-19, with positive viral RNA detected in air samples between two isolated patients (>1.8 metres apart), as well as in air samples outside patients' isolation rooms [21]. The extent to which the SARS-CoV-2 virus can travel over longer distances is currently unknown [22], although anecdotal evidence from the rapid and widespread transmission in environments such as cruise ships, where people were confined to their cabins and practised hand hygiene, suggests that the virus may travel over longer distances, possibly via internal ventilation systems.
The extent to which SARS-CoV-2 can travel over longer distances is currently unknown [22], although anecdotal evidence of rapid and widespread transmission in environments such as cruise ships, where people were confined to their cabins and practised hand hygiene, suggests that the virus may travel over longer distances, possibly via internal ventilation systems. SARS-CoV-2 can survive on a variety of surfaces: on plastic for 72 h, stainless steel for 48 h, copper for 8 h, cardboard for 24 h [23], and a surgical mask for 7 days [24], subject to favourable humidity and temperature. Like other coronaviruses, SARS-CoV-2 can be stored at −80 °C for several years and is inactivated at 56 °C for 30 min. Additionally, 75% ethanol, 0.1% sodium hypochlorite, and 0.5% hydrogen peroxide can inactivate SARS-CoV-2 [25]. The incubation period in susceptible COVID-19 patients is 1-14 days, with an average of 3-7 days [18]. From the existing data, SARS-CoV-2 can be detected in multiple sources, including gastrointestinal tissue [26], tears [27], stool [28], blood [29], and saliva [30-33] of COVID-19 patients. Initial mathematical modelling suggests that the basic reproductive number (R0) of SARS-CoV-2 is 1.4-3.9 [34], indicating that, without interventions, one infection would lead to 1.4 to 3.9 new infections; R0 estimates may vary with biological, socio-behavioural, and environmental factors [35].

Rapid identification and publication of the virus's genome sequence have facilitated the development of diagnostic methods, as well as the race to develop a vaccine. The standard method of COVID-19 detection is reverse transcription polymerase chain reaction (RT-PCR), generally used to detect viral RNA from nasopharyngeal and oropharyngeal swabs or sputum samples. Qualitative RT-PCR assays are easier to validate than quantitative (RT-qPCR) assays and are preferred for diagnostics. Furthermore, a chest X-ray can be a useful diagnostic tool to detect bilateral pneumonia, presenting as multilobar ground-glass opacities with a peripheral, asymmetric, and posterior distribution [36]. Alarmingly, some patients remain viral RNA positive 13 days after hospital discharge and may even relapse [37], suggesting that a virus-eliminating immune response to SARS-CoV-2 may not occur in some patients. As of 19 March 2020, a serology antibody test to detect immunoglobulin G (IgG) and IgM had been approved by the FDA as a point-of-care test, though it is not yet widely used. It is likely that, as the pandemic reaches its next phases, increased focus will be placed on monitoring immunity within the population.

Airborne transmission of viruses can generally occur in two ways: through relatively large droplets of respiratory fluid (10-100 µm) or through smaller particles called aerosols (<10 µm). The larger droplets are pulled to the ground by gravity quickly, so transmission requires close physical proximity, whereas aerosolised transmission may occur over larger distances and does not necessarily require infected and susceptible individuals to be co-located at the same time [38]. Respiratory and salivary droplets appear to be the main transmission routes of COVID-19 through inhalation, ingestion, and/or direct mucous contact [39]. Indeed, it has been suggested that such droplets can travel up to four metres with an uncovered cough [40].
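To make the droplet-versus-aerosol distinction concrete, the following back-of-envelope sketch (ours, not from the cited studies; the droplet density, air properties, and 1.5 m fall height are assumed round numbers) estimates terminal settling velocities with Stokes' law, illustrating why ~100 µm droplets settle within seconds while <10 µm aerosols can remain airborne for many minutes:

```python
# Back-of-envelope estimate of droplet settling via Stokes' law:
# v = (2/9) * (rho_droplet - rho_air) * g * r^2 / mu_air.
# All parameter values are illustrative assumptions (water-like droplets,
# standard air); Stokes' law is only approximate for the largest droplets.

G = 9.81              # gravitational acceleration, m/s^2
RHO_DROPLET = 1000.0  # droplet density (~water), kg/m^3
RHO_AIR = 1.2         # air density, kg/m^3
MU_AIR = 1.8e-5       # dynamic viscosity of air, Pa*s

def settling_velocity(diameter_m: float) -> float:
    """Terminal settling velocity (m/s) of a small sphere in still air."""
    r = diameter_m / 2.0
    return (2.0 / 9.0) * (RHO_DROPLET - RHO_AIR) * G * r**2 / MU_AIR

for d_um in (100, 10, 1):
    v = settling_velocity(d_um * 1e-6)
    minutes = 1.5 / v / 60.0  # time to fall ~1.5 m from face height
    print(f"{d_um:>3} um droplet: v = {v:.1e} m/s, ~{minutes:.1f} min to fall 1.5 m")
```

Under these assumptions a 100 µm droplet falls 1.5 m in roughly five seconds, whereas a 10 µm aerosol takes around eight minutes, consistent with the close-proximity requirement for droplet transmission described above.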
It has also been shown that SARS-CoV-2 can survive in aerosols in an experimental setting [23,24], but it is unclear to what extent such particles are generated in "real-life" situations, and whether such particles are sufficient to cause infection. The aerosol route for COVID-19 transmission therefore requires further verification in clinical settings, taking into account the presence of patients and health workers, air circulation, and other environmental factors. The potential for transmission via salivary bioaerosols poses a particular danger to healthcare workers who operate in close proximity to the face and oral cavity, such as dental practitioners; oral-maxillofacial surgeons; ear, nose, and throat (ENT; otorhinolaryngology) surgeons; and ophthalmologists, especially when carrying out aerosol-generating procedures [41,42]. Indeed, the COVID-19 outbreak has resulted in significant curtailment of the services provided by these health professionals, posing a significant public health problem, as important and highly prevalent oral and ENT conditions cannot be adequately treated during the epidemic [41-44]. Thus, understanding the role of salivary aerosols in COVID-19 transmission is imperative, as is an appreciation of the effect of various environmental and therapeutic interventions on the extent of aerosol creation, and the development of strategies to minimise the risk to health professionals and patients alike. The role of pre-procedural rinsing [45] with disinfectant mouthwash needs to be explored in this context. Similarly, the use of personal protective equipment, such as masks and respirators, which could be effective in preventing airborne transmission of coronavirus [46], needs to be tested in clinically relevant situations where droplets and aerosols are generated from biofluids (including saliva) during medical procedures. High-volume suction and filtration air systems, especially in clinical settings where aerosols (including those from saliva) can be generated by surgical procedures, also require further investigation.

The timing (highest viral titres) and specimen collection sources can significantly influence the diagnostic sensitivity of SARS-CoV-2 detection tests. One study reported that oropharyngeal swabs (n = 398) were used far more often than nasopharyngeal swabs (n = 8) in China during the COVID-19 outbreak; however, SARS-CoV-2 RNA was detected in only 32% of oropharyngeal swabs [47]. On 19 March 2020, the World Health Organisation (WHO) recommended that both upper (nasopharyngeal and oropharyngeal swabs) and lower (sputum, bronchoalveolar lavage, or endotracheal aspirate) respiratory specimens be collected; however, upper respiratory samples may fail to detect early viral infection, and the collection of lower respiratory specimens increases the biosafety risk to healthcare workers via aerosol/droplet formation. As viral shedding progresses, additional sample sources, such as stool, saliva, and blood, can be used as alternatives or combined with respiratory specimens. However, only 15% of patients hospitalised with pneumonia had detectable SARS-CoV-2 RNA in serum [48], and 55% of patients showed positive SARS-CoV-2 RNA in faecal samples [49].
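As a rough illustration of why combining specimen types can raise diagnostic yield, the sketch below treats specimen results as independent; this is an illustrative assumption on our part (real specimens from one patient are correlated, so the result is an optimistic upper bound), using the single-specimen detection rates quoted above:

```python
# Illustrative only: if specimen results were independent (a strong
# simplifying assumption), the probability that at least one specimen
# tests positive in a true case is 1 - product(1 - sensitivity_i).

def combined_sensitivity(sensitivities):
    """Probability of at least one positive result across specimen types,
    assuming independence between specimens."""
    p_all_negative = 1.0
    for s in sensitivities:
        p_all_negative *= (1.0 - s)
    return 1.0 - p_all_negative

# Single-specimen detection rates quoted above:
# oropharyngeal swab 32% [47], serum 15% [48], faecal samples 55% [49].
print(f"{combined_sensitivity([0.32, 0.15, 0.55]):.2f}")  # ~0.74
```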
Conversely, different clinical studies reported that 87%, 91.6%, and 100% of COVID-19 patients, respectively, were identified as viral positive in saliva samples [30,31,33], suggesting that saliva is a powerful specimen source for the diagnosis of SARS-CoV-2. Saliva also represents an attractive biofluid option for the detection of SARS-CoV-2, being non-invasive, easy to access, and low-cost, as well as able to "mirror" systemic and local disease status [50]. It is well known that saliva harbours a wide range of circulatory components (Figure 2), such as pro-inflammatory cytokines [51,52], chemokines [53], matrix metalloproteinases [54,55], mitochondrial DNA [56], genomic DNA [57], bacteria [58], SARS-CoV and SARS-CoV-2 virus [30,31,59], SARS-CoV antibodies [59], miRNAs [60], and extracellular vesicles (EVs) [61]. Furthermore, saliva samples can be stored at −80 °C for several years with little degradation [62]. It is preferable to aliquot and freeze the samples to avoid freeze-thaw cycles. For salivary RNA research, it was discovered that saliva samples can be stored in TRIzol for more than two years at −80 °C without adding RNase inhibitors [63,64], suggesting such specimens can be used for future diagnostics. Thus, saliva may be a valuable specimen to collect from COVID-19 patients at different time points during disease onset, progression, and follow-up. Indeed, saliva may be useful both for diagnosing the presence and sequelae of COVID-19 infection and for identifying and tracking the development of immunity to the virus.

Saliva has been widely investigated as a potential diagnostic tool for chronic systemic and local (oral) diseases [50], with less attention given to its utility in acute infectious diseases such as COVID-19. The salivary gland can be infected by SARS-CoV-2, resulting in the subsequent release of viral particles or antibodies into saliva, as evidenced in Rhesus macaques, where salivary gland epithelial cells were the first target cells of SARS-CoV infection [59]. This is likely facilitated by the high expression of hACE2 (the SARS-CoV-2 receptor) on the epithelial cells of the oral mucosa, as demonstrated using single-cell RNA sequencing [65]. Saliva and throat-wash (gargling 10 mL of saline) samples from 17 SARS-CoV patients were found to be SARS-CoV RNA positive, with the highest detection rate a median of four days after disease onset, during lung lesion development [66]. Saliva samples from 75 patients validated saliva as a viable biosample source for COVID-19 detection when compared with nasopharyngeal or oropharyngeal swabs [67]. At present, only three clinical studies (Table 1) and one animal model have investigated the use of salivary diagnostics for COVID-19. SARS-CoV-2 was detected in self-collected saliva (patients were asked to expectorate saliva) in 11 of 12 confirmed cases [31]. Another recent study found that 100% of COVID-19 patients (n = 25) tested viral positive in drooling saliva samples [33]. Further, in a cohort of COVID-19-positive patients, 87% of posterior oropharyngeal (deep throat) saliva samples tested viral positive (n = 23), and serial respiratory viral load of SARS-CoV-2 was detected from week 1 up to 25 days after symptom onset, while serum samples (n = 16) showed positive RT-qPCR detection only 14 days after symptom onset [30]. Additionally, Kim et al.
demonstrated that SARS-CoV-2-infected ferrets shed virus in nasal washes, saliva, urine, and faeces up to eight days post-infection, and that ferret-to-ferret transmission occurred only two days post-contact [32]. Notwithstanding the limitations of small sample sizes and the lack of detailed saliva collection methodology, these studies nevertheless imply that saliva is a promising non-invasive alternative specimen for SARS-CoV-2 diagnosis. Further investigations are required to explore the potential role of saliva in COVID-19 detection in both symptomatic and asymptomatic patients.

Table 1. Clinical studies investigating saliva for SARS-CoV-2 detection.
- Self-collected (expectorated) saliva; RT-qPCR; 11 of 12 confirmed cases viral positive [31].
- Posterior oropharyngeal (deep throat) saliva; RT-qPCR; 87% of patients viral positive (n = 23) [30].
- Drooling saliva; RT-qPCR; 100% of patients viral positive (n = 25) [33].

In summary, the current gold-standard diagnostic test is RT-qPCR detection of SARS-CoV-2 RNA, which takes approximately 48 h to return results. Newer tests with higher sensitivity and specificity need to be appropriately validated before being implemented in routine diagnosis.

From early reports on the clinical characteristics of COVID-19, it is now apparent that not all people exposed to SARS-CoV-2 become infected and that not all infected patients develop severe symptoms [18]. Indeed, three broad presentations of SARS-CoV-2 infection can be characterised: (i) an asymptomatic incubation stage with or without detectable virus; (ii) a non-severe symptomatic stage with confirmed presence of virus; and (iii) a severe respiratory symptomatic stage with high viral load [68].
Determining the immune status of an individual is likely to become increasingly critical as the COVID-19 pandemic progresses. From a prevention perspective, individuals at stage (i) (stealth carriers or 'super spreaders') are particularly important because they may spread the virus unknowingly. Two stages of the immune response during COVID-19 disease progression have been proposed [69]: (1) an immune-defence-based protective phase, in which the individual's adaptive immune response eliminates SARS-CoV-2; and (2) an inflammation-driven phase, in which the protective immune response is impaired and a prolonged, propagating viral load leads to an adverse inflammatory response in organs with high hACE2 expression. Indeed, a likely pathogenic mechanism of SARS-CoV-2 is overactivation of T cells, with an increase in CD4+ T helper cells and enhanced cytotoxicity of CD4+ and CD8+ T cells [70], which leads to an imbalanced pro-inflammatory and anti-inflammatory cytokine response and severe immune injury in susceptible patients [71]. Although this concept needs to be confirmed by more clinical research, it may provide useful research directions for tackling COVID-19. During the previous SARS outbreak, a common transmission-pattern hypothesis was that SARS-CoV silently infected asymptomatic patients, which may have led to population immunity against infection (herd immunity) and may explain the eradication of the virus, although this is yet to be confirmed [72]. Although one study suggests that coronavirus antibodies are highly prevalent in the general population after exposure to four non-SARS coronavirus strains [73], there is no definitive evidence on whether lasting immunity is generated against other CoV species, such as SARS-CoV-2. Notably, after SARS-CoV infection in a murine model, SARS-CoV-specific serum IgG and secretory immunoglobulin A (sIgA) were detected in saliva following intranasal immunisation [74].

In relation to COVID-19, intensive care unit (ICU) patients had higher plasma levels of pro-inflammatory cytokines, including IL-2, IL-7, IL-10, G-CSF, IP-10, MCP-1, MIP-1α, and TNF-α, compared with non-ICU patients [48], suggesting the emergence of a robust immune-inflammatory response in severely symptomatic COVID-19 patients. Importantly, several studies have demonstrated that COVID-19 patients develop IgG and IgM antibodies against SARS-CoV-2 in blood samples. Both IgG and IgM antibodies against the SARS-CoV-2 nucleoprotein and spike receptor-binding domain were increased in serum from day 10 after symptom onset for up to three weeks [30]. A point-of-care lateral flow immunoassay (LFIA) product (VivaDiag COVID-19 IgM/IgG Rapid Test) was designed to detect IgM and IgG in blood samples of COVID-19 patients within 15 min [75]. However, the sensitivity of the VivaDiag COVID-19 IgM/IgG Rapid Test was only 18.4% in blood samples from acute COVID-19 patients in the emergency department [76], suggesting that serological tests require more research before being deemed suitable for routine diagnosis. Additionally, the seroconversion rates for total antibodies, IgM, and IgG were shown to be 93.1%, 82.7%, and 64.7%, respectively, in hospitalised COVID-19 patients, peaking 7-14 days after symptom onset [77]. Given the non-invasive and cost-effective nature of saliva collection, it would be important to investigate whether such immunity detection is feasible in saliva samples as a tool for facilitating population-level testing of COVID-19 immunity.
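To illustrate why an 18.4% sensitivity is so problematic for point-of-care use, the short sketch below applies Bayes' theorem; the sensitivity is as reported for the VivaDiag test [76], but the 95% specificity and 10% prevalence figures are assumptions chosen for illustration, not values reported in the cited studies:

```python
# Hedged illustration of why low sensitivity undermines point-of-care use.
# Sensitivity is as reported for the VivaDiag test [76]; the specificity
# (95%) and prevalence (10%) are assumed values for illustration only.

def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' theorem."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.184, 0.95, 0.10)
print(f"PPV ~{ppv:.2f}, NPV ~{npv:.2f}")  # PPV ~0.29, NPV ~0.91
# Under these assumptions a negative result barely changes the prior,
# so the test cannot reliably rule infection in or out in this setting.
```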
SARS-CoV-2 enters the oral cavity, and hence saliva, through several routes: direct infection of oral mucosal lining cells, droplets from the respiratory tract, the blood circulation via gingival crevicular fluid, or extracellular vesicles secreted from infected cells and tissues, as described in [43]. As such, saliva is a common route for transmission of the virus, including airborne transmission during routine activities such as speaking, as well as through infection-associated symptoms such as sneezing and coughing. Transmission via saliva may represent a particular threat to health workers who work in close proximity to, and undertake procedures within, the oral cavity. Aside from salivary viral RNA testing by RT-qPCR, we propose that salivary ELISA for IgM/IgG against SARS-CoV-2, isolation of SARS-CoV-2 double-membrane extracellular vesicles (EVs), anti-SARS-CoV-2 surface proteins, viral load, CD4+/CD8+ T cell-derived EVs, and pro-inflammatory cytokines could serve as potential diagnostic and prognostic biomarkers for COVID-19 (Figure 3). A salivary test would be particularly important for improving the effectiveness and efficiency of prevention strategies for healthcare professionals, especially when performing aerosol-generating procedures. Indeed, an ideal saliva test would be a disposable "off-the-shelf" device that could be used at home by individuals, without exposing them or others to a potential environmental infection risk.

In conclusion, although saliva is currently perceived as a foe in the battle against COVID-19, being a prominent source of disease transmission via droplets and possibly aerosols, it can also be harnessed as a friend in the detection of the virus and of an individual's immunity to it. Indeed, non-invasive saliva sampling may be an alternative, cost-effective method for improving the sensitivity and accuracy of large-scale detection of COVID-19 virus and/or immunity, significantly decreasing the risk to medical professionals and patients.

Figure 3. Saliva has a role in human-to-human transmission via bioaerosols and droplets. Salivary proteins and anti-SARS-CoV-2 antibodies, viral particles, EVs, and infected host cells can be potential diagnostic, prognostic, and COVID-19 immunity-monitoring biomarkers for both symptomatic and asymptomatic patients. EVs: extracellular vesicles; ELISA: enzyme-linked immunosorbent assay.

Funding: This research received no external funding. The authors declare no conflict of interest.
tions. 2 An additional objective was to follow SeM antibody titers out to 7 months after infection to determine immunoglobulin decay and to monitor for the development of additional complications. We hypothesized that the magnitude of the SeM antibody titer after infection (SeM titer ≥1:12 800) would be useful for monitoring the presence of complications or the risk of their development. The horses that were euthanized were all complicated cases, including metastatic abscess formation, infarctive purpura hemorrhagica, secondary pleuropneumonia, and dysphagia. This reported outbreak is novel because there was a high proportion of horses with SeM antibody titers ≥1:12 800 (33%). There was also a high proportion of horses with complicated disease (29%) compared with reported complication rates (2%-20%). 1,2,4 In this outbreak, case fatality in complicated cases (50%) was high compared with the reported case fatality in complicated cases (up to 40%). 2 Lastly, there was a high proportion of horses developing persistent GP infection (43%); this is higher than in previous reports (up to 10%) 2 but consistent with a more recent report (up to 40%). 1,4 Possible explanations for the higher complication rates and higher convalescent SeM antibody titers include a higher dose of exposure or a more virulent S. equi strain. 10,11 Lastly, 13 horses were not tested for carrier status and were presumed not to have persistent GP infection; however, this could have led to misclassification. At initial testing for carrier status, some horses were tested only once with nasopharyngeal lavage, largely for economic reasons; however, the current recommendations for detection of carrier status are endoscopy and GP lavage. 1,4 This limitation could have led to underestimating the number of horses with persistent GP infection in this outbreak. It could also have led to overestimating the sensitivity and specificity of SeM antibody titers for detecting persistent GP infection, as all of these horses had a SeM antibody titer ≤1:3200; however, this outbreak and other studies have indicated that SeM titers are not useful in this manner. 1,8,9 This outbreak illustrates the utility of SeM antibody titers after infection with S. equi. This study demonstrates that a horse may have complications of strangles without a SeM antibody titer ≥1:12 800 and that a horse may have a SeM antibody titer ≥1:12 800 without complications. A convalescent SeM antibody titer ≥1:12 800 warrants additional investigation for complications or persistent GP infection but does not necessarily confirm that a horse has complicated disease. Results of this study were presented as an abstract at the 2018 American College of Veterinary Internal Medicine Forum, Seattle, WA. There was no off-label antimicrobial use. The authors declare that human ethics approval was not needed for this study. https://orcid.org/0000-0003-0259-7210
health services. However, the disproportionate effects of COVID-19 on the most disadvantaged, especially BAME people placed at risk by their social and economic conditions, were entirely predictable. Mental health is best ensured by urgently rebuilding the social and economic supports stripped away over the last decade. Governments must pump funds into local authorities to rebuild community services, peer support, mutual aid, and local community and voluntary sector organisations. Health care organisations must tackle racism and discrimination to ensure genuine equal access to universal health care. Government must replace highly conditional benefit systems with something like a universal basic income. All economic and social policies must be subjected to a legally binding mental health audit. This may sound unfeasibly expensive, but the social and economic costs, not to mention the costs in personal and community suffering, though often invisible, are far greater. The views expressed in this article are those of the author(s). Publication in Wellcome Open Research does not imply endorsement by Wellcome.

There has been much discussion about the mental health implications of coronavirus disease 2019 (COVID-19), both of the pandemic itself and of the 'lockdown'. Many have predicted short-, medium-, and long-term mental health problems. There is some belated recognition of the crucial role of social inequality and of the disproportionate toll borne by the most disadvantaged groups in society. However, the main emphasis has been on expanding access to specialist mental health services to cope with an anticipated surge in mental health problems. As members of the Society and Mental Health COVID-19 Expert Group, hosted by the Centre for Society and Mental Health at King's College London, we argue that there is an urgent need for an alternative approach. Some surveys have reported increased levels of anxiety and sadness and attributed these to the pandemic 1,2. These are normal and understandable responses to situations involving threats and disruptions to habitual forms of life; the curtailing of social contacts and increased social isolation; and encounters, both actual and virtual, with sickness and death. Though undoubtedly distressing, for most people these are not symptoms of mental disorder and will not lead to enduring mental health problems requiring specialist therapeutic intervention. As successful public health interventions during previous crises have shown, the most effective support for those who experience such distress is practical. This includes information to support immediate problem-solving, assistance with everyday tasks, ensuring financial and housing security, maintaining trust through openness and honesty, and, crucially, the (re)building of community infrastructures and informal social support networks 3. But when it comes to mental health, as with so many other dimensions of COVID-19, we are not 'all in it together'. As so clearly shown by a whole body of evidence on the social determinants of mental health, the greatest risk of developing serious and enduring mental distress will fall upon those already impacted by social inequality, and this will be exacerbated by the current crisis and its aftermath 4.
Elevated risks of poor psychological wellbeing for the already vulnerable are linked to isolation, economic stress, stigma, racism, and social exclusion 5, which will be exacerbated as resources are further diverted by COVID-19 responses. Further, we know that physical and mental health are interdependent and entwined, and thus mental health will be affected by the experience of COVID-19 6. There are clear gender implications of COVID-19: while reports have largely focused on the increased mortality among men, there has been almost no attention to the double burden that the lockdown has imposed on the mental health of women from the most disadvantaged communities, many of whom have increased domestic responsibilities while at the same time being obliged to continue paid employment, often in front-line jobs. Those experiencing the greatest social disadvantage are thus most likely to suffer the worst mental health impacts, and those with pre-existing mental health conditions may experience a deterioration in their mental health exacerbated by a further reduction in the levels of social support available to them. In our view, such evidence from the social sciences, which is borne out by the knowledge of those with lived experience of mental ill health, should have been central to pandemic preparedness planning. We believe that it must now urgently be deployed to identify the places and communities that need most support. Resources must be rapidly, pre-emptively, and unconditionally directed to address immediate material requirements and to strengthen both informal and formal support networks. Interventions such as those proposed by Holmes et al. 7,8, based in psychology, psychiatry, pharmacology, genetics, molecular biology, neurology, neuroscience, the cognitive sciences, computer science, and mathematics, will be ineffective if they do not address the underlying social causes of mental ill health. Immediate action should be taken to tackle the conditions that impact directly on the most socially excluded, especially Black, Asian, and minority ethnic (BAME) communities. These include poor and overcrowded housing conditions; the experience of racism, xenophobia, and violence; obesogenic, degraded, and polluted environments; financial insecurity and callous, conditional welfare benefits; precarious work and exposed conditions for front-line workers in care homes, transport workers, delivery drivers, warehouse packers, and taxi drivers; children's education damaged by schools impoverished by a decade of financial restrictions and by lack of access to the resources for digital education; and community facilities hollowed out by a decade of austerity. Hasty policies, such as the curtailing of the rights of mental health patients to proper assessments before involuntary detention, as included in the Coronavirus Act 2020, should rapidly be reversed. The social realities impacting mental health will not disappear when lockdown eases. They will only be intensified as the economic consequences of the pandemic play out. We welcome the publication of the Public Health England review of Disparities in the Risk and Outcomes of COVID-19, which shows very clearly the impact of COVID-19 on those most socially disadvantaged 9, and note that our argument is supported by the belated publication of the literature reviews and especially the stakeholder input 10.
The epidemiological evidence confirms that the excess burden of COVID-19 borne by those from Black and minority ethnic backgrounds is largely accounted for by the dimensions of social disadvantage that we have noted, and this is powerfully reinforced by the contributions of community organizations and mental health service users. If we are to implement policies which bring about progressive and transformative improvements in the mental wellbeing of our most disadvantaged communities as we enter the next phase of recovery from the pandemic, it is critical that the expertise of social scientists, and of those with lived experience of mental ill health, plays a key role in policy development and implementation. This evidence on the social substrates of poor mental health has important lessons for the short-, medium-, and long-term policies needed to mitigate the transition from understandable distress to significant and enduring mental health problems. Mental health and well-being are enhanced by elevated social solidarity, informal social support, mutual aid, and mutual innovation in relation to crisis conditions 11; by measures to increase equality 12; and by providing the resources necessary for the realization of capabilities 13,14.

Table 1. Mental health for all - building back better, building back fairer.
- Introduce mental health audits and inequality impact assessments of pandemic and post-pandemic policies across all sectors.
- Replace conditional welfare support with unconditional measures that promote capabilities for the most disadvantaged, such as free, accessible public transport.
- Ensure sustained, adequate support for children from disadvantaged families being 'home schooled', including access to meals, breakfast clubs, facilities for internet access, and resources for digital education.
- Design economic policies to maintain a strong safety net of income security, particularly for the most traditionally vulnerable groups, including a recovery basic income package which will support all, including the most financially disadvantaged.
- Ensure equality of access to health services by taking immediate and effective action to tackle institutional racism and to promote anti-racist and inclusive decision-making and practice.
- Address gender-based discrimination and promote equal access for lesbian, gay, bisexual, and transgender people and people with disabilities.
- Invest rapidly to support mutual aid, community groups, and voluntary sector organizations decimated by a decade of austerity, with an emphasis on women's refuges, homeless charities, and community-based support by and for Black and minority ethnic people.
- Invest rapidly in local community facilities and services - local authority, community, and voluntary sector organizations - across a range of health and social sectors.
- Reverse the rolling back of service users' rights to health and social care services that occurred in pandemic legislation.
- Re-invest in community mental health teams, rebuild public mental health infrastructure and community mental health services.
- Provide resources to support service user and survivor, carer, mutual aid, and self-help groups.

As we set out in Table 1, to create "the optimum structure for mentally healthy life" 7 we must harness
resources from sociology, anthropology, geography, politics, and economics to inform rapid policy innovation, alongside legal changes, which will, on the one hand, address the fundamental social causes of mental ill health and, on the other, create the social conditions that maximize human well-being. The fault-lines in British society have been starkly disclosed by the pandemic. To 'build back better' in the long aftermath of COVID-19, we need to create social and material environments that not only address the causes of mental ill health but also enhance the capabilities of all citizens to create lives of meaning and purpose for themselves. No data are associated with this article. Competing Interests: No competing interests were disclosed.
The world is in the midst of the most severe pandemic in living memory. Scientists dubbed the pandemic's source "severe acute respiratory syndrome coronavirus 2" (SARS-CoV-2), but it is more commonly referred to by the label assigned to the disease it causes: coronavirus disease 2019, or "COVID-19". COVID-19 has spread rapidly, at historic scale and with unprecedented impacts. Although milder symptoms include fever, aches, dry coughing, and shortness of breath, COVID-19 can pose life-threatening conditions, ranging from respiratory failure to multi-organ dysfunction. Older adults and those with pre-existing conditions (e.g., asthma) are at higher risk of the more severe impacts. However, everyone is susceptible, and anyone can contract and spread the disease. The numbers of COVID-19 patients seeking medical care have strained entire healthcare systems worldwide. In many locations, outbreaks of COVID-19 have overwhelmed hospitals and healthcare professionals. Moreover, the effects go far beyond those felt by healthcare systems; they stretch across virtually every sector of society, from food systems to education, and have debilitated economies. Societies rely on the health sciences and medicine to forecast the pandemic's trajectory, to accelerate the development of vaccines, to explain the situation to a worried public, and to navigate the myriad of related health decisions. However, addressing the COVID-19 pandemic and its effects on society requires more than the actions of healthcare and medical professionals alone. It calls for the engagement of citizens, governments at all levels, and a diverse array of organizations and individuals involved in policymaking processes and policy implementation. Questions thus arise about the role of the policy sciences in comprehending such a crisis. Lasswell (1956a) envisioned the policy sciences as providing insights into such situations, challenging and informing ongoing processes and decisions, and foretelling future scenarios, all with the intent of steering government and society toward greater human dignity for all. Since the formulation of this vision over seven decades ago, the policy sciences have evolved into a vibrant field of scholarship, marked by conceptual richness, theoretical diversity, and methodological pluralism (Cairney and Weible 2017; Torgerson 2017). This commentary capitalizes on that diversity and follows in the footsteps of Lasswell's vision by responding to the following question: what insights do the policy sciences offer to help us understand the pandemic? We answer this question using ten policy perspectives featured in the policy sciences literature. These perspectives draw inspiration from Lasswell's (1956b) comprehensive portrayal of the functional elements that shape public policy. This requires going beyond analyzing any single aspect of public policy or a specific policy decision to understanding the dynamics of the processes, actors, and interactions that shape policy decisions in response to COVID-19. These include perspectives on policymaking (within country), crisis response and management, global policymaking and transnational administration, policy networks, implementation and administration, scientific and technical expertise, emotions, narratives and messaging, learning, and policy success and failure. The conventional conception of public policy casts it as encompassing both the decisions and non-decisions of governments.
As a reflection of societal values and priorities, public policies can take a "traditional" form, such as law, regulation, executive order, local ordinance, and court decision (among others). They can also take the form of on-the-ground regularized choices by frontline bureaucrats. In all these forms, public policies represent the priorities of a society and they, in turn, shape society. COVID-19 has spawned a surge in the number of public policies adopted, in the forms in which they are adopted within and across governments, and in the range of their designs and contents. Most countries have closed or restricted their borders and restricted travel within their borders. One-third of the world's population has been subjected to some form of social restriction (from school closures to stay-at-home orders). These policy decisions exist across levels of government. For example, some occur at the national level, such as the world's largest lockdown, targeting India's 1.3 billion people, while others occur at the subnational or local level, such as California's state law prohibiting the eviction of tenants of commercial property. 1 In examining this surge of policy change through the lens of the literature on policymaking, a few lessons emerge.

Governments adopt public policies through different pathways
Supporting the literature on policy change (Weible and Sabatier 2017), the pathways to policy change during COVID-19 include: (1) learning, as demonstrated in the UK's shift from mitigation (partial closures) to suppression (strict lockdowns) following projections of the infection and death consequences of the former (Walker et al. 2020; Hunter 2020); (2) negotiated agreement, as illustrated by the passing of stimulus packages around the world, including in the USA (Werner et al. 2020), Canada (Bolongaro 2020), and Japan (Kyodo 2020); and (3) the diffusion and transfer of ideas across governments, with many drawing lessons from South Korea's widespread testing and China's strict quarantining. Policy decisions are further conditioned by contextual factors, including institutional factors (e.g., constitutional and legalistic structures), cultural orientations, economies, and political styles (among others). For example, Sweden's response to COVID-19 has thus far avoided many of the lockdowns of other countries, a response that has been partially attributed to a culture of trust and responsibility. Finally, prompting all of these changes is the shock of COVID-19 itself, which directly affects healthcare systems worldwide but indirectly affects other policy areas, for example, by postponing welfare reforms, environmental policies, and other actions deemed "non-essential". 2

Uncertainties exist regarding the duration and termination of policy decisions
While we are experiencing a surge of policy change aimed at reducing immediate societal threats, there remains great uncertainty regarding which of these changes will remain permanent and which will be terminated. This includes questions about how they will be terminated (phased or immediate) and the political consequences of reversing decisions that increased welfare benefits to cope with the immediate crisis.

Government non-decisions become just as important as decisions
Alongside the decision to take policy action is the choice not to act or to delay action.
These can be witnessed in information-reporting delays, such as China's failure to promptly report human-to-human COVID-19 transmission (Madrigal and Meyer 2020), and in deliberate value-based choices, as illustrated by President Trump's decision to rely on political pressure and markets over immediate activation of the Defense Production Act to produce and distribute needed medical supplies across the USA (Peres 2020).

Crisis management scholarship describes and explains societal actions in response to situations where there is a threat to core values, urgency to take action, and uncertainty concerning the situation and courses of action (Rosenthal et al. 1989). These conditions bring crucial leadership challenges associated with decision making, public information, sense making, accountability, learning, and reform (Boin et al. 2005), but they also require broad collaboration and coordination involving multiple individuals and organizations. Crisis response and management shares an immediate interdependence with (1) public policies, including the content of previously and newly adopted public policies, (2) the interactions of individuals, groups, coalitions, and networks, and (3) contextual conditions, including income levels, local interactions, and global-level decisions.

Responses occur at strategic and operational levels
Crisis response and management occurs at two levels (Boin and 't Hart 2010). The operational level refers to on-the-ground decisions and behaviors and includes medical personnel, epidemiologists, emergency managers, and other professionals coping with the pandemic's immediate threat. The strategic level includes political-administrative leaders who carry political responsibility, make strategic decisions, provide public accounts of events, and support coordination and collaboration. The ongoing need for adjustments in crisis response and management in the face of evolving circumstances and events requires continuous engagement from both levels.

Mitigating value conflicts sparks public controversies and blame-games
During complex crises, multiple values are at stake simultaneously and decisions must be immediate. For the COVID-19 pandemic, one of the choices has been between mitigating and suppressing COVID-19. Such choices impose different social and economic costs and benefits and raise important questions about how we value those costs and benefits. Given heightened public attention and policy impacts across society, most policy decisions (and non-decisions) are heavily scrutinized and politicized through framing contests and blame-games (Brändström and Kuipers 2003). Examples include debates around the Swedish strategy of ensuring a slow spread of the virus (Henley 2020) and conflict in Brazil between state governors and the president over the best approach to tame the epidemic (Reuters 2020). Other governments, such as those of New Zealand (Roy 2020), Ireland (Power et al. 2020), and Iran (Karimi and Batrawy 2020), have been publicly criticized for doing too little too late. These experiences challenge the notion that policy conflicts can be temporarily suspended in times of crisis, with political opponents rallying around the proverbial flag until the worst is over. Indeed, there is a strong possibility that, while some policy conflicts will wane, others will wax as opportunities for political gain manifest and divisions emerge between those who support or oppose a government's response.
Transboundary crises can both spur and challenge collaboration
Transboundary crises span functional areas and/or multiple jurisdictions over time, while posing novel governance challenges (Boin 2009; Bynander and Nohrstedt 2020). International collaboration has flourished in response to COVID-19, channeled through the epistemic community of epidemiologists, virologists, and pharmacologists. Such collaboration is enabled by a global network of state agencies, private interests, and international institutions working to coordinate public information activities and global research priorities (Mesfin 2020). In this transboundary crisis, countries exchange data and experiences to learn about the virus and its effects. High-level officials meet regularly to discuss travel bans and trade, and undertake joint actions to dampen economic impacts (Khan 2020). Meanwhile, many potential pitfalls plague the pursuit of such collaboration. Ample examples illustrate how communication failures, political values and identities, and weak mandates can undermine efforts to achieve a collective crisis response (Boin and 't Hart 2010).

Global policy processes refer to "a set of overlapping but disjointed processes of public-private deliberation and cooperation among both official state-based and international organizations and non-state individuals around establishing common norms and policy agenda for securing the delivery of global public goods or ameliorating transnational problems" (Stone and Ladi 2015, 2). Transnational administration is directly related and concerns "the regulation, management and implementation of global policies of a public nature by both private and public individuals operating beyond the boundaries and jurisdictions of the state, but often in areas beneath the global level" (Stone and Ladi 2015, 2). Self-evidently, the spread of COVID-19 presents a global policy problem but arguably has not (yet) become subject to transnational administration.

Inequalities drive differential impacts of policy responses, which, in turn, exacerbate inequalities
Space for self-isolation is unaffordable in slums. Individuals have different possibilities of returning home when businesses shut down, as illustrated by the situation in India, where thousands of migrant workers were stranded in the wake of the lockdown (Abi-Habib and Yasir 2020). The pandemic also compounds inequalities between the so-called Global North and Global South, where "basic handwashing facilities are not available for 40% of the world population, let alone soap or hand sanitizers" (Racalossi de Moraes 2020).

Destabilization and reinforcement of global policy processes
COVID-19 could lead to greater "de-globalization", a return of big government, and quite possibly more authoritarian government. Regional integration could be slowed, as seen in the European Union, where nation-states initially closed borders and prioritized national responses. However, the European Union also shows the continuation of collaboration in ensuring the stability of internal markets and in joint planning for the economic crisis. Similarly, while many efforts at international cooperation have been shaken, others have been strengthened, including the ongoing exchange of COVID-19 data among experts (Varnum 2020). Public sector interventions, such as development projects and programs designed around global norms, could be at risk; this includes those expressed in the Sustainable Development Goals (SDGs).
Additionally, traditional international organizations, such as the World Health Organization, have gained (or will gain) a legitimacy boost, alongside international bodies, including university centers for global health security, and events such as the World Health Summit.

Uncertainty about the locus of authority and influence of global professionals
The policy communities that form around global health policy or the pandemic response include experts, bureaucrats, diplomats, consultants, and other professionals highly experienced in their policy sectors and in international cooperation. However, the idea that public administration and decision making rest in the hands of professionals who work through international arrangements outside or beyond the accountability structures of established nation-state institutions is deeply disconcerting for those who believe such dynamics are anti-democratic and lead to unaccountable "global elites" (Stone 2019). Yet overcoming COVID-19 rests with these professionals, illustrating the tension between effectiveness and accountability in transnational relations.

Swirling around all policy decisions and their implementation are policy networks (Marsh and Rhodes 1992; Jenkins-Smith et al. 2018), generally defined as entities seeking to influence policy, their relationships, and related outcomes. Policy networks include political parties, public agencies, elected offices, interest groups, non-government organizations, academia, think tanks, and many more. These entities relate to each other through a variety of ties important in policymaking, such as information and resource exchanges, collaboration, trust, and ally/enemy relations.

Policy networks react and contribute to the shifting of attention to policy issues and the changing of government agendas
The COVID-19 pandemic signifies a sudden and drastic shift in the issues policy networks pay attention to and, therefore, changes in the agendas of many government decision-making venues, such as legislatures and parliaments. For example, Switzerland's parliament broke off its spring session and tabled other issues, such as climate change and pension reforms. With the shifting of foci on policy issues and the changing of agendas, there have also been changes in policy conflicts and in the relationships among people on different sides of policy issues. For example, policy networks in the context of COVID-19 have focused more on the fundamental purpose of a policy issue area, whether that is educating children or delivering food to grocery stores, and less on issues of secondary importance.

Prior policy networks condition policy and societal responses
Many of the responses observed during COVID-19 reflect the vulnerabilities and strengths of prior policy networks, as well as emergent relationships (Bodin et al. 2019). For example, the stable and resilient policy networks that include national and subnational governments in Switzerland have been blamed for contributing to the country's slow pandemic response. However, Switzerland's slow response has also been attributed to its consensus-based and decentralized system of governance, which takes time to align top-down measures with growing awareness and fear among the people.

Changes in the importance of policy networks' people and organizations, relations, and resources
Once established, policy networks have been shown to be relatively stable (at least in organizational representation), with regularized patterns of interactions (Jenkins-Smith et al. 2018).
Some of these policy networks have been altered in the wake of the COVID-19 response. This includes making some relations superfluous and others essential, elevating the centrality of some entities (such as public authorities and experts), and pushing others to the periphery (such as political parties and associations). For example, in federalist countries, addressing the COVID-19 pandemic stresses the interplay between national and subnational authorities. In the USA, this is evident in New York Governor Cuomo's political rise in his ongoing tussles with President Trump over the gravity of New York's situation and the role (or lack thereof) of the federal government in supporting the state's mitigation efforts (Enton 2020). In Switzerland, some cantons have circumvented central decisions by taking stricter measures than those introduced by the national government.

During periods of crisis and high uncertainty, the demand for scientific and technical expertise increases as governments and the public search for certainty in understanding problems and choosing responses. This creates a need for what is perceived as evidence-based policymaking, which signals to the public that decisions are being made based on reasoned and informed judgments that serve the public good, rather than special interests (Cairney 2016). Yet scientific and technical experts also serve to inform, legitimize, and justify government responses to problems, even as political considerations and normative orientations continue to dominate such choices. The result is a simultaneous increase in reliance on scientific and technical experts and in the politicization of scientific and technical information.

Scientific and technical experts become more central in policy responses to uncertain problems
Before the pandemic emerged as a global crisis, a community of scientific and technical experts existed in areas including epidemiology, virology, public health, and the medical sciences (Haas 1992). Without much public or political exposure, this community of experts has forged ahead in advancing public health knowledge in relation to pandemics. While these experts do not necessarily agree on all aspects of their expertise, they share foci, vocabularies, and methodological and theoretical orientations. The COVID-19 pandemic has suddenly elevated this community into the world's public and political spheres. Their vocabulary, for example, has entered the public lexicon, including words and concepts such as "pandemic", "quarantine", "flattening the curve", "social distancing", "personal protective equipment" (PPE), and "coronavirus" (Shepherd 2020). These scientific and technical experts have become part of decision-making processes, as their names and images join political leaders as the face of government responses, notably illustrated by President Trump's shared press conferences with Anthony Fauci, director of the US National Institute of Allergy and Infectious Diseases.

Governments invoke scientific and technical expertise to inform and legitimize problems, responses, and evaluations
One of the fundamental purposes of scientific and technical information is to inform and legitimize governments' choices, especially in high-stakes situations. The notion is that evidence is the basis for sound policy decisions. Scientific and technical experts become part of the rationale for governments' responses and serve as a means of reassuring the public (Orange 2020).
An increased demand for evidence-based policymaking also challenges experts (who need skills to simplify and communicate technical information) and policymakers (who need to balance political judgment and responsibility in the use of scientific and technical information).

Scientific and technical expertise can obscure accountability for decisions
As scientific and technical experts help inform and legitimize decisions, they also obscure responsibility for policy responses and outcomes. Scientific and technical experts can help specify the severity of COVID-19 in a population, project its trajectory over time, and estimate the likely effects of different policy responses, from mitigation to suppression. Yet formulating and adopting policy responses is the responsibility of government leaders. As scientific and technical experts become more prominent in the policy process, who is accountable for policymaking becomes more obscure.

Policymakers rely on scientific and technical information to inform and legitimize their decisions. This reliance has contributed to an image of science as distinct from emotions, while emotions conjure images of spontaneity or irrationality. This image of emotions affects their role in policy processes, often placing them in what have been understood as the "emotional spheres of life", such as the home, intimate situations, and personal feelings (Stone 2013). Yet emotions are part of the policy process and are used strategically to shape public policy responses and their effects on society (Durnová 2019).

Governments appeal to emotions to help legitimize policy responses and steer public reactions
We see government officials referring to 'fear' of the rapid spread of COVID-19, as much as we observe emphases on 'trust' in fellow citizens to comply with imposed policy measures. We see extensive references to "anxiety" regarding insufficient health resources to contain the pandemic, regarding social isolation, and regarding the general uncertainty about how long all this will last. Indeed, policymakers seem to have legitimized their policy choices through the emotional needs of the citizenry just as much as through perceptions of "objective" scientific evidence.

Emotionally charged language can recall cultural and historical contexts
By referring to COVID-19 as the "invisible killer" that "threatens" the UK, Boris Johnson linked fear with the unprecedented and uncontrollable, legitimizing the drastic reduction in personal freedom in the country. 3 Such a portrayal differs from the words of Swedish Prime Minister Stefan Löfven, who described the virus as "testing our country, our society and us as human beings". 4 In Löfven's discourse, "testing" gives an image of hope, and the explicit reference to "human" invokes a compassionate response by society. While Johnson speaks about "each of us", he places this pronoun in the context of a "huge national effort", enabling him to urge "the people of this country to rise to that challenge and…to come through…stronger than ever…as many times in the past". Through the reference to the past, he appeals to emotions of patriotism and national pride. The latter helps interpret one of Donald Trump's framings of the pandemic as "foreign" and its spread as "cases entering our shores" (Kessler and Rizzo 2020). 5 Trump strengthens this frame through the extensive use of military vocabulary, as when he describes the virus as something to be "defeated".
Iranian Supreme Leader Ali Khamenei uses a similar framing by claiming the virus "comes from the US" and could even be "manipulated" by them. This framing helps to legitimize the Iranian Government's limited ability to deal with the pandemic, as it links emotions of anxiety around COVID-19 to the anxiety around major geopolitical conflict. 6

Policy responses force a reevaluation of the emotional spheres in societies

Policy responses to the pandemic have rippled through societies, including into the homes and lives of citizens. National lockdowns, for example, have elevated the necessity of homeschooling, self-care in response to long isolation, and a need for online psychological consultation services, such as those responding to the rise of domestic violence. 7 These are examples of how policy responses to the pandemic have entered the emotional spheres of the global citizenry and pose novel challenges to short- and long-term government efforts (Jupp et al. 2016; Durnová and Hejzlarová 2018). The policy sciences focus attention on the messages and messengers that aim to influence decision-makers in government or the public (Crow and Jones 2018), which often include elements of emotions as described above. These messages can influence individual risk perceptions and risk reduction responses during a crisis like the COVID-19 pandemic. Understanding risks is key to persuading people and their governments to do something in the face of uncertainty and crisis. They need to know what the risk is, how bad it is, and what they need to do to reduce their risk or help the collective effort. Understanding these risks can be difficult for many, and persuading people to change their behavior can be even more challenging, even with the best communications approach. Governments generally act with three points about narratives and messaging in mind during a crisis.

Governments attempt to provide sufficient information in a timely manner to the public

China's initial response, wherein the government failed to notify the public and global community about the nascent outbreak (Yuan 2020), falls at one end of the spectrum. At the other end of the spectrum, several US state governments hold daily briefings with media access and live coverage (Barnello 2020), largely due to the failures of the federal government. In one early example, Ohio Governor Mike DeWine began holding daily briefings alongside his Health Director before many other states. He was also the first US governor to bluntly warn about school closures: "So we've informed the superintendents, while we've closed schools for three weeks, that the odds are this is going to go on a lot longer and it would not surprise me at all if schools did not open again this year" (Anderson 2020).

Governments attempt to provide information that is accurate and non-contradictory to the public

Just as important, there is a spectrum of observed government approaches to providing consistent and accurate information. For example, Taiwan's, Singapore's, and South Korea's governments acted swiftly to provide residents information and testing (Apuzzo and Gebrekidan 2020). In contrast, the US government has provided haphazard and contradictory information (Lopez 2020), affecting the public's trust and reactions (Sanders 2020). President Trump has contradicted his own public health experts numerous times, sowing confusion about the virus's severity and characteristics (Abadi et al. 2020).
On the other hand, many US governors, including those of New York, Ohio, Colorado, and California, have been praised for their consistent approach to providing information about the outbreak in their states.

Governments can spawn controversies by engaging in speculation

Governments can create confusion and conflict through speculation and the dissemination of false information. For example, President Trump lauded the potential of the drug chloroquine to counter the novel coronavirus. As reported by CNN, "Health officials in Nigeria have issued a warning over chloroquine after they said three people in the country overdosed on the drug, in the wake of President Trump's comments about using it to treat coronavirus" (Busari and Adebayo 2020). Various strands of research in the policy sciences have recognized that learning plays a critical role in our ability to understand, influence, and address complex policy issues. Learning can bring new issues to light, challenge previously held beliefs, and help identify innovative policy responses. In democracies, processes that facilitate learning, such as stakeholder dialogue, are often valued for their potential to bring diverse forms of knowledge, whether scientific, experiential, or value-based, into policy decision making. Given the importance of learning, and the challenges associated with it, numerous scholars have sought to diagnose learning in policy contexts (Heikkila and Gerlak 2013; Moyson et al. 2017), including learning around crises.

Urgency triggers learning from others' experiences

The pandemic illustrates intra-crisis learning, including how experts and decision-makers continuously review and update policy responses as new knowledge becomes available (Moynihan 2008). The time lag between countries' experiences with COVID-19, particularly in societies that were affected early, including China and Italy, provides other countries an opportunity to monitor the pandemic and evaluate policy responses as a basis for their own. We also see evidence of learning in a variety of domains and scales of policymaking: from local leaders who learn from public health agencies about the extent and impact of the virus in their communities, to parents learning from each other how to co-produce their children's education with schools (Darling-Hammond 2020).

Learning manifests in different ways

Learning can take various forms: as updates to our understanding of instrumental or technical aspects of a policy problem, as changes to our underlying policy beliefs or values about societal priorities in responding to problems, and as fundamental alterations to the institutions that target these problems. Instrumental learning around COVID-19, for instance, has occurred regarding how long the virus can linger on surfaces, leading to closures of many public and private buildings. Influencing our value orientations, the COVID-19 crisis has brought attention to underlying social dilemmas that make people either more vulnerable to the virus or vulnerable to the efforts to stop it. We also see evidence of learning about the strengths and vulnerabilities of the institutional rules structuring our governments and their efforts to tame the pandemic. Sweden, for example, passed a new bill to empower the national government to temporarily close schools in the nation, which was previously a municipality-level responsibility. 8
Different barriers inhibit learning

In the case of the COVID-19 pandemic, learning is potentially constrained by several issues: the immediacy and urgency of the crisis, popular demands for forceful action, limitations in technical knowledge, and politicization (Stern 1997). This raises questions as to whether we are learning the right things and whether the right people are learning. Many of our policy choices reflect a "muscle memory" from the past to guide us through the crisis until we can pause and reflect, allowing for deeper forms of learning. With COVID-19, we have some experiences to draw on, as illustrated by the USA ensuring oversight in the relief bill, building in part on perceptions of what the 2008 stimulus package lacked (Woodruff 2020). However, the novelty of COVID-19 may also prevent learning opportunities from the past from guiding us (Brändström et al. 2004). At the same time, in the face of a crisis we may be even more inclined to look to those who are most like us, politically and ideologically, for lessons. For instance, across subnational governments in the USA we have seen differing approaches to lockdown policies that correspond closely with political ideologies (Adolph et al. 2020). Public policy is not self-enacting; rather, administrative actions bridge a government's intent to do something (policy) and the real-world impacts of that intent. Crises such as the COVID-19 pandemic demand swift and coordinated action that adapts fluidly to conditions, or "contingent coordination" in the words of Kettl (2003). Such coordination generally spans different agencies and levels of government. Furthermore, as devolution and privatization of public services have shifted critical administrative functions to disparate entities both within and outside of government, policy responses to even simple emergencies call for joint action between government organizations, nonprofits, for-profit enterprises, and individuals. Every aspect of implementation shapes how public policy takes place "on the ground", from how administrators interpret policy directives to the way front-line personnel operationalize them.

Administrative fragmentation and decentralization complicate implementation

Pandemic response requires interagency collaboration across fragmented bureaucratic structures and distinctive organizational cultures. In the USA, for example, the Federal Emergency Management Agency (FEMA) needs access to critical Health and Human Services (HHS) information, while directing agencies such as the Army Corps of Engineers to set up emergency medical infrastructure and the Department of Transportation to maintain supply chains. Meanwhile, administrators in state, local, and tribal governments look to agencies such as HHS and FEMA for direction and assistance. Although the goal is streamlined hierarchical coordination, power struggles between levels of government are just as likely (Lester and Krejci 2007). Administrators face additional challenges in coordinating with nonprofit and for-profit partners. Absent formal mechanisms of control, they must leverage indirect measures. For example, administrators have devised credible commitments with for-profit and nonprofit hospitals to encourage them to forgo elective surgery revenues (by canceling procedures), which creates additional capacity for treating COVID-19 patients.
Governments' reliance on nonprofits to not only deliver essential public services but also subsidize government funding of them is on full display in the midst of the crisis. Nonprofits are seeing unprecedented demand for their services while facing the financial implications of the pandemic's impact on the economy. As described by Goodwill CEO Steven Preston (as cited in Associated Press 2020): "The financial impact of the crisis has put the very survival of many essential service providers at risk…[nonprofits] are our society's shock absorber when crisis hits".

Front-line workers exercise discretion and self-regulation

Front-line personnel rely on discretion to develop routines, norms, and creative strategies as a means of coping with the often unreasonable responsibilities assigned to them (Hupe 2013). Heuristics and workarounds are particularly relevant in the pandemic. Examples include tragic accounts of the revised triage frameworks physicians apply to manage staggering numbers of infected patients, and the solutions that hospital staff devise to address shortages of critical medical equipment, from facemasks to ventilators.

Co-production requires overcoming collective action challenges

The pandemic calls on citizen co-production (Voorberg et al. 2015) in the realization of policy goals on an unprecedented scale. "Social distancing" recommendations and "stay-at-home" orders ask residents to put aside their self-interests, from the comfort of group interactions to the critical desire for financial security, to reduce the virus's spread and "flatten the curve". Because many such policies are voluntary, encouraging compliance obliges public servants to find ways to activate residents' civic sense of duty and to harness social pressures. Such efforts are likely to be more effective if they draw on the popular legitimacy held by intermediaries, from civic organizations to for-profit companies, to exert normative pressures toward compliance. The policy sciences are often used to understand policy evaluation in the more normal rhythms of policy cycles, strong evidence bases, and evaluative tools and techniques. However, COVID-19 has propelled policy evaluation out of these normal rhythms by imposing extreme urgency, ambiguity, and value conflicts. Insights from the literature on policy success and failure, with its extension to the crisis management domain (McConnell 2011), provide a useful starting point for assessing policymaking under such extreme conditions.

Who is affected, and to what extent, influences frames of success or failure

Policy decisions are likely to benefit some populations and harm others. Banning international flights into a country may be successful for the health of a national community, but not from the vantage point of families stranded overseas who cannot return. There is also ambiguity when it comes to the extent of success or failure, such as assessing the proportion of a population being tested, being infected, recovering, and dying. Challenges mount when data supporting these assessments are absent or are considered at different times.

Success or failure is judged as part of decisions, processes, and politics

Crisis decisions focus on public policy and can be evaluated based on containing threats, minimizing damage, and restoring order and stability.
Crisis processes can be evaluated against criteria ranging from adherence to processes relevant to resolving the crisis at hand (from activating plans to well-judged improvisation) to following a process that is legitimate, whether through constitutional conventions or through legitimacy garnered from key stakeholders. Crisis politics focus on success from the perspective of governments and can be assessed against reputational protection, enhancement, and popular support; the ability to manage policy and political agendas with as little backfire as possible; and the capacity to maintain long-term governance/ideological visions. This threefold distinction helps capture many of the dynamics and tensions of how we assess responses to the COVID-19 pandemic. For example, a government may fail through initial reluctance to act on early warning signs about the potential risks of the virus (decision failure), but succeed much more in garnering political sympathies and support (political success) for its struggle in the face of adversity. A government may succeed in rushing through a series of draconian measures such as quarantine and lockdowns (process success) but face a backlash against the centralization of political power (political failure).

It is possible to conceive of a spectrum from success to failure

We may judge outcomes as leaning toward the success end of the spectrum even when there have been shortfalls, such as when initial delays in ordering testing kits still lead to perceived success overall once testing kits arrive and high-volume laboratory processing occurs. Correspondingly, outcomes may also be judged as leaning toward failure despite small gains and comforts, such as Italy's collapsing emergency healthcare, even though some lives were saved. In the middle of this spectrum is a mix of successes and failures, akin to a tug-of-war over perceptions of the outcomes related to crisis decisions, processes, and politics.

Lenses and narratives shape perceptions of success and failure

We will always view success and failure through lenses of values and other orientations (Lasswell 1970). If we feel the overriding priority is providing financial aid to the unemployed and low-waged, then we are unlikely to view bailouts for airlines as a success. Adapting Bovens and 't Hart's (1998) useful approach to COVID-19 entails assessing the extent of success/failure, its causes (from mismanagement to inevitability), and its implications for future crisis decisions (from refining existing directions to the need for dramatic change). This can be applied to the "whole of government" response or simply to one aspect of the responses. Multiple narratives and variations are possible, but we outline three hypothetical illustrations:

• First is the success trajectory, e.g., a reduction in the number of daily cases is the product of the early banning of international flights from China. Our successes were and will continue to be the result of pre-emptive action.
• Second is the failure trajectory, e.g., the current exponential rise in confirmed cases is the product of complacent political leadership, more interested in calming fears for the next election than in addressing very real threats. New thinking is needed to avert more unnecessary deaths.
• Third is the mixed trajectory, e.g., the government has succeeded in slowing down the rate of new infections but hospitals still cannot cope. We cannot afford to be complacent and must channel additional funds into front-line healthcare.
Lasswell (1956a) envisioned the policy sciences as both relevant and timely. In this spirit, this commentary draws immediate reflections from different perspectives of the policy sciences to understand the COVID-19 pandemic. The pandemic poses unprecedented challenges in its immediate need for action, global span, and magnitude of impacts. We write this at a time when the pandemic has not yet reached its peak; hence, we draw on early observations in a concerted effort to offer insights into the ways in which scientific and technical expertise, emotions, and narratives and messaging legitimize policy decisions and shape relationships among citizens, organizations, and governments. We demonstrate the varied processes of adaptation and change, including learning, surges in policy responses, shifts in networks locally and globally, implementing and administering policies in response to transboundary issues, and assessing policy success and failure. There are also understudied aspects of the policy sciences that deserve more attention in the aftermath of the COVID-19 pandemic. These include (but are not limited to) the following avenues of research:

• The global response to the pandemic has heightened the need for renewed research not only on the surge of new policy decisions, but also on the effects of non-decisions and policy terminations.
• Given the necessity of mass behavioral change for overcoming the pandemic, more research is needed to examine the relationship between crises and public responses.
• The pandemic has further exposed economic and political inequalities in global policy responses, yet questions remain about how to mitigate these inequalities to support the world's most vulnerable.
• The political response to the pandemic has altered priorities and, thus, the focus and intensities of policy conflicts, but the characteristics and permanency of these changes remain unknown.
• The increased reliance on scientific and technical expertise in making policy decisions raises questions about political accountability in policymaking.
• While much of our focus has been on the use of scientific and technical expertise in supporting policy decisions, we have not focused enough on the role of emotions and their effects on legitimizing decisions and achieving desirable outcomes.
• Even though narratives and messaging are important, we still know little about how to construct and deliver them effectively to influence public behavior.
• The pandemic has renewed attention to the importance of, and how little we know about, learning under stress and urgency in the middle of a crisis.
• Given the necessity of linking mass responses and policy decisions, the pandemic reinforces the need to foster understanding of both public policy co-creation and co-production.
• While we know base values and other orientations drive judgments of policy success and failure, questions remain about how to deal with the tradeoffs between them.

This commentary also shows that the strength of the policy sciences lies in their capacity to provide general insights into interactions between public policy and society. Of course, given the breadth and depth found in the policy sciences, we make no claims that this commentary comprehensively draws from all its sources and relevant perspectives. We leave it to others to continue the conversation that we know will shape much of our research in the years to come.
Coronavirus disease 2019 (COVID-19) is caused by the novel coronavirus SARS-CoV-2 [1]. SARS-CoV-2 infection has spread all over the world and has resulted in 1,436,198 confirmed cases of infection and 85,522 deaths as of 9 April 2020 [2]. This condition poses an urgent public health issue worldwide [3]. No specific treatment or vaccine is available against this virus [4]. Hence, a classic but effective method to stop the propagation is to cut the chain of transmission by using personal protective equipment and limiting personal contact [5]. The incubation period of COVID-19 is 1-14 days, mostly 3-7 days, and its main manifestations include fever, dry cough, and fatigue. COVID-19 is transmitted person-to-person through respiratory droplets and close contact [6, 7]. Fragile patients, such as patients with cancer and elderly persons, are frequently infected and often develop severe symptoms, such as dyspnea and/or hypoxemia, one week after the onset of the disease, and their prognosis is very poor [8]. Hence, more attention should be paid to patients with blood diseases. Moreover, these patients usually develop infections other than COVID-19 that manifest with fever, because of immunodeficiency and/or myelosuppression after many cycles of chemotherapy [9]. COVID-19 infection should therefore be distinguished from other microbiological infections in hematology patients. Considering the risk of COVID-19 exacerbation, the consequences are extremely serious for these fragile patients. Hence, potentially infected patients with COVID-19 should be identified quickly and isolated early [10]. Furthermore, health personnel should be protected from infection to provide the best possible medical services for patients, and the outbreak risk in the hospital should be evaluated [11]. Therefore, prevention and control strategies for nosocomial infection in the hematology department should be discussed to prevent COVID-19 infection and its severe consequences. In the present study, we share our experience from the past two months in the hematology department and suggest preventive actions for the future.

Overview of general measures for nosocomial infection prevention in the hematology department

The hematology department of Zhongnan Hospital of Wuhan University comprises an outpatient clinic and an inpatient department. The latter includes three units, namely, general, intensive care, and laminar air flow wards. Several measures have been implemented to prevent nosocomial infection in the hematology department, and an overview of these measures is shown in Fig. 1. The inpatient department was reorganized in accordance with the requirements of nosocomial prevention and control strategies. The intensive care and laminar air flow wards were closed. Temporary isolation wards were planned with three zones and two aisles in case of a suspected or confirmed COVID-19 case [12]. Furthermore, the rules of sanitation and standard operational procedures were fully implemented across different dimensions, such as health personnel, patient and companion management, local sanitation management including environment disinfection, sterilization of medical facilities and equipment, and medical and non-medical waste disposal.
A workflow for outpatient clinic management was also designed to exclude the potential risk posed by two kinds of patients carrying SARS-CoV-2, namely, infected patients without symptoms and patients in the infectious incubation stage [13] (see Fig. 2). All patients were first received at the pre-check office, where temperature measurement and a short investigation of COVID-19 epidemiology were performed. Then, these patients were guided to the fever clinic or a specialist clinic for further consultation [14]. Once COVID-19 was excluded, patients were allowed to consult with the hematology clinic. Temperature was checked, and a careful epidemiological history was taken again before evaluating hematological problems. For patients who did not require admission because they had no or mild symptoms, a prescription was provided with a suggestion to continue online follow-up. For patients who needed hospital admission for further treatment, COVID-19 screening tests, including chest CT scan, blood routine test, virus PCR, and antibody test, were prescribed immediately after admission. Patients with positive findings were transferred to the temporary isolation wards to await expert consultation, and then transferred to the infectious disease department or a designated hospital. Only patients with negative findings could continue specific treatment with close temperature monitoring. Standard hygiene measures for all staff and the local environment were implemented according to international suggestions and guidelines from the National Health Commission on nosocomial infection prevention and control [15-18]. Moreover, additional, intensified measures were carried out for the management of health personnel and patients.

Personal health status report with temperature check

All staff provided a daily report of their temperature and contact history with confirmed or suspected COVID-19 cases. The body temperature of staff on duty was checked before entering the ward.

Strict implementation of standard prevention and hand hygiene

Standard personal protection with surgical mask, cap, and gloves was applied in routine activities for all patients. Level 2 protection, with an additional isolation gown and protective mask, was implemented once a patient presented with fever and a potential risk of COVID-19 exposure. Once patients were diagnosed as suspected or confirmed COVID-19 cases, level 3 protection was implemented, especially during high-risk medical activities. Hand hygiene was strictly implemented at all times.

Standardized daily behaviors of medical staff

The number of staff in the department was kept to a minimum, and they were required to have enough rest to maintain their immunity. All staff confined themselves to a direct pathway between home and hospital to avoid unnecessary contact with persons of unknown status. All staff meetings were held online as e-meetings. The use of personal protective equipment complied with hygienic regulations, and equipment was replaced after use for each suspected patient. Eating and drinking were only allowed in the clean area. Furthermore, staff rested at staggered time intervals. When staff met unexpectedly, a distance of at least 1 m was maintained. Hand hygiene was observed before examining patients and performing tests, and strictly observed after contact with patients, provision of treatment, and handling of any patient sample.
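The screening workflow described above is essentially a branching decision procedure. The sketch below is a minimal illustration in Python, not the hospital's actual protocol: the function, the field names, and the 37.3 °C fever threshold are assumptions introduced for the example and are simplified from the prose description.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    temperature_c: float        # measured at the pre-check office
    epidemiologic_risk: bool    # suspicious contact or travel history
    needs_admission: bool       # requires inpatient hematology treatment
    screening_positive: bool    # chest CT, blood routine, virus PCR, or antibody test

def triage(p: Patient) -> str:
    """Simplified outpatient triage loosely mirroring the Fig. 2 workflow (illustrative only)."""
    # Pre-check: fever or epidemiological risk goes to the fever clinic first.
    if p.temperature_c >= 37.3 or p.epidemiologic_risk:
        return "fever clinic: exclude COVID-19 before any hematology consultation"
    # Hematology consultation: mild cases continue with online follow-up.
    if not p.needs_admission:
        return "prescription + online follow-up"
    # Admission pathway: COVID-19 screening tests gate entry to the ward.
    if p.screening_positive:
        return "temporary isolation ward -> expert consultation -> designated hospital"
    return "hematology ward with close temperature monitoring"

# Example: an afebrile patient with no risk history who needs inpatient treatment
print(triage(Patient(36.8, False, True, False)))
```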
All staff were required to wear a mask even during breaks.

Sustained training through theory courses and drill practice

First, training courses were provided to all staff through online courses and videos, covering COVID-19 knowledge (including clinical manifestations, diagnosis, and treatments), nosocomial infection prevention and control strategies (including personal and environmental hygiene maintenance), and the correct donning and removal of personal protective equipment. An assessment quiz was conducted after the training courses to make sure that all members knew how to protect themselves. Second, an emergency drill was designed and practiced with small groups, as shown in Fig. 3. A particular case was examined, and the medical team dealt with this situation as an actual emergency. Through this drill, potential problems with personal protection and with implementing the chart for dealing with a local COVID-19 outbreak were examined, and improvements and adjustments were made according to the drill results. Finally, all staff were well trained before facing this epidemic. Training courses and brochures about COVID-19 and nosocomial infection prevention and control strategies were provided to patients on the first day of admission. Local management rules were explained clearly, with a signature of confirmation. Newly admitted patients underwent COVID-19 screening tests. No companions or visitors were permitted, except for extremely old or disabled patients with an absolute need of assistance, and COVID-19 screening tests were required for companions before entering the department. Furthermore, once a patient developed fever or other suspected symptoms, COVID-19 screening tests were conducted again to ensure that he or she was not infected or a potential infection source. Temperature was checked twice a day. Patients were asked to stay in their own wards, with group meals delivered to their door to reduce unnecessary contact (Fig. 2: workflow for outpatient consultation; pre-check and triage were done first, and only patients without risk of COVID-19 could proceed with hematology consultation after double temperature checks). Patients and their companions were asked to respect personal hygiene instructions, such as wearing masks, keeping a distance of 1 m during contact, eating or drinking at staggered times, hand hygiene, and taking showers frequently. With the measures described above, including the workflow implemented in the outpatient clinic and inpatient department and the management of staff and patients, zero nosocomial infection of COVID-19 was recorded in the hematology department. More measures could be explored for preventing nosocomial infection, not only in hematology but also in other departments, even in medical institutions worldwide. Once staff fully understand the importance of nosocomial infection prevention and control, measures will be implemented correctly. Adequate personal protective equipment is used in response to the risk level after infection risk evaluation [19, 20]. Standardized operation procedures and suggestions about regulated daily activities and personal daily activities, such as eating and drinking, were specified in documents and are implemented strictly for everyone (Fig. 3: chart for the emergency drill of a local COVID-19 outbreak in the hematology department). Inspection and supervision were reinforced by the nosocomial infection control office [21]. E-meetings or discussions about residual problems were scheduled to sustain improvements and adjustments.
Furthermore, emergency drills were carried out regularly in case of a sudden outbreak in the local service or in the hospital.

Generalization of knowledge about COVID-19 and personal hygiene for the whole population by distributing brochures

One of the crucial methods to stop the propagation of the COVID-19 epidemic is cutting the chain of transmission, particularly by controlling the infection source. Notably, some people infected with SARS-CoV-2 do not exhibit symptoms [22]. Hence, the general population should be educated about this epidemic and the corresponding measures to protect themselves from COVID-19 [23]. The outpatient clinic and the hematology inpatient department were located in separate buildings. The local layout was planned according to three zones and two passageways. Transit wards were constructed for patients during the 3-5 days of waiting for screening results before transfer to the hematology department. A temporary isolation ward was also prepared for patients who presented with symptoms after admission to the hematology service. Different zones, including the clean, buffer, and polluted areas, were organized clearly [16]. Sanitation workers were trained systematically to respect all the procedures of nosocomial infection prevention and control. They understood how to use adequate disinfection methods for cleaning different materials, such as the ground, walls, object surfaces, and medical facilities, and they fully knew how to deal with the different wastes generated in the service, such as medical waste and normal domestic garbage. Good natural ventilation or air-sterilizing machines should be used [18]. Once patients were discharged, complete terminal disinfection was performed in their ward. Furthermore, a tracing system should be implemented, with an execution form recording the disinfection time and signature. All these activities should be supervised by nurses or doctors assigned to the nosocomial infection prevention and control team. All patients were advised to first consult an online clinic for a preliminary consultation with doctors. The doctors could then evaluate whether the patients needed to come to the hospital or could stay at home with a prescription and regular follow-up by teleconsultation. Patients who really needed to come to the hospital for further examination or treatment could make an appointment with doctors at a fixed time point to avoid close contact with other patients. Registration could be done online, and the results could be viewed on their mobile phones. The subsequent admission appointment could be sent to patients through messages or calls, thus avoiding prolonged stays in the outpatient clinic waiting for results or a second trip to the hospital to obtain them. A robot could be installed in the outpatient clinic. It could perform pre-check and triage by checking patient temperature and carrying out a rough investigation of chief complaints and associated symptoms. It could then guide patients to the specified clinic or examination site according to the appointment made. It could also assist hospitalized patients, especially those in the temporary isolation ward: it can deliver food and oral drugs to patients, take basic vital parameters, and help integrate useful information into the patient record. A video surveillance system and an intercom system allowed close monitoring of patients and prompt communication with them without close contact.
These applications could be useful and accommodating, especially during this epidemic period [24]. In conclusion, with all the measures described above, such as rational local organization, training courses and emergency drills, standardized operation procedures and documents for medical activities, local disinfection and hygiene maintenance, and patient education brochures, the hematology department has maintained zero infection among the medical staff and no cross-infection among patients and their family members. Moreover, the absence of nosocomial infection could be maintained by observing all the suggestions related to sanitation security. Even after the epidemic, the regulations on nosocomial infection prevention and control should continue to be observed. Moreover, the basic strategies of nosocomial infection prevention should be understood and practiced in daily medical activities to prevent nosocomial infection.
L-arginine, hereinafter referred to as arginine, is a semi-essential or conditionally essential amino acid, since it can be synthesized by healthy individuals but not by preterm infants [1]. From a chemical point of view, arginine is 2-amino-5-guanidinopentanoic acid (Figure 1). Its name derives from the Greek word ἄργυρος (silver), indicating the color of arginine nitrate crystals. Arginine is involved in a number of biological processes: it is the substrate for a series of reactions leading to the synthesis of other amino acids, and it is a substrate for two enzymes, namely nitric oxide (NO) synthase (NOS) and arginase, which are fundamental for the generation of NO and urea, respectively. Arginine is known to act as a substrate for NO production by endothelial cells, thus regulating vascular tone and, overall, cardiovascular homeostasis [2]. NO is synthesized from arginine by the enzyme NOS in a reaction that involves the transfer of electrons from nicotinamide adenine dinucleotide phosphate (NADPH), via flavin adenine dinucleotide (FAD) and flavin mononucleotide (FMN) in the C-terminal reductase domain [3, 4], to the heme in the N-terminal oxygenase domain, where the substrate arginine is oxidized to citrulline and NO [5, 6], as shown in Figure 1. Arginine is also implicated in T-cell proliferation and host immune responses, as well as in creatine and collagen synthesis [7-11]. There are three isoforms of NOS, two of which, endothelial (eNOS) [12, 13] and neuronal (nNOS) [14-16], are constitutively expressed, while the third, inducible NOS (iNOS) [17-19], is expressed in response to cytokines and is related to the inflammatory response [6, 20].

NO generation

Under normal conditions, NOS catalyzes the transformation of arginine, O2, and NADPH-derived electrons into NO and citrulline (Figure 1). However, in pathologic conditions like atherosclerosis and diabetes, NOS function is altered, and the enzyme catalyzes the reduction of O2 to superoxide (O2−), a phenomenon generally referred to as "NOS uncoupling" [30-41] that has been linked to a limited bioavailability of tetrahydrobiopterin (BH4, also known as sapropterin) [42-47]. Indeed, the donation of an electron by BH4 to produce a transient BH4•+ radical is required for the oxidation of arginine to citrulline and the associated formation of a ferrous iron-NO complex at the NOS heme catalytic center [48-51]. BH4 is synthesized from guanosine triphosphate (GTP) by GTP cyclohydrolase I (GTPCH) and recycled from 7,8-dihydrobiopterin (BH2) by dihydrofolate reductase (Figure 2). Of note, NOS is inhibited by arginine analogs that are substituted at the guanidino nitrogen atom, like NG-monomethyl-arginine or NG-nitro-arginine [52-58]. As mentioned above, in the urea cycle arginine is converted by arginase, a manganese metalloenzyme, into ornithine and urea; this cycle is crucial not only for allowing urea excretion, but also for producing bicarbonate, which is critical for maintaining acid/base homeostasis [59-63]. Arginase exists in two distinct isoforms, arginase I and II, that share ~60% sequence homology; arginase I is a cytosolic enzyme mainly localized in the liver, whereas arginase II is a mitochondrial enzyme with a wide distribution and is expressed in the kidney, prostate, gastrointestinal tract, and the vasculature [64-67].
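For orientation, the overall stoichiometry of the coupled NOS reaction sketched above can be written as follows; this is the commonly cited textbook formulation, given here as a reference point rather than a quotation from the cited sources:

$$\text{L-arginine} + 2\,\mathrm{O_2} + \tfrac{3}{2}\,\mathrm{NADPH} + \tfrac{3}{2}\,\mathrm{H^+} \;\longrightarrow\; \text{L-citrulline} + \mathrm{NO} + 2\,\mathrm{H_2O} + \tfrac{3}{2}\,\mathrm{NADP^+}$$

Under the uncoupled conditions described above (for example, when BH4 is limiting), the electron flow is instead diverted to molecular oxygen, $\mathrm{O_2} + e^- \rightarrow \mathrm{O_2^{\bullet-}}$, producing superoxide rather than NO.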
The enzyme arginase is a key modulator of NO production because it competes for arginine: in other words, NO generation depends on the relative expression and activities of arginase and NOS. More specifically, increased arginase activity may decrease the bioavailability of arginine for NOS, thereby diminishing NO production. This mechanism has emerged as an essential factor underlying impaired endothelial function [68, 69]. Specifically, increased arginase activity has been associated with endothelial dysfunction in a number of experimental models of hypertension, atherosclerosis, diabetes, and aging. Indeed, endothelial dysfunction is a leading cause of several pathological conditions affecting the cardiovascular system, including hypertension, atherosclerosis, diabetes, and atherothrombosis [46]. Moreover, in April 2020, we were the first group to show that the systemic manifestations observed in coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), could be explained by endothelial dysfunction [120]. Indeed, alterations in endothelial function have been linked to hypertension, diabetes, thromboembolism, and kidney failure, all featured, to different extents, in COVID-19 patients [121-123]. Other investigators later confirmed our view [124-133]. On these grounds, based on the positive effects of arginine on endothelial function, we can also speculate that arginine supplementation could be helpful, while not being harmful, for counteracting endothelial dysfunction in COVID-19 patients. An increasing interest in the potential therapeutic effects of arginine supplementation, especially in cardiovascular disorders, has recently emerged. Impaired NO synthesis is considered a main feature of a dysfunctional endothelium [107, 134-136]; however, several studies suggest that arginine supplementation in healthy subjects does not lead to a significant increase in NO production [11, 137-140]. For instance, the daily administration of arginine for 1 week did not affect the serum concentration of

The major determinants of cardiovascular risk, including dyslipidemia, glucose intolerance, smoking, hypercholesterolemia, and aging, have a direct impact on the endothelium [179-181]. Exposing the vasculature to these conditions induces endothelial dysfunction and alterations as an early phenomenon, able to evolve and contribute to the progression towards clinically relevant disorders like hypertension, atherosclerosis, and diabetes mellitus. Hence, the endothelium plays a key role in cardiovascular physiology and pathophysiology [182-194]. Fervent research has been conducted in recent years to understand the underlying mechanisms and identify therapeutic strategies to prevent or counteract endothelial dysfunction. The ability of the endothelium to regulate vascular homeostasis is largely dependent on NO production, making endothelial vasodilator failure the main sign of endothelial dysfunction and a key therapeutic target. The impaired endothelial NO availability in perturbed vasculature can be attributable to diminished synthesis of NO or, indirectly, to increased production of reactive oxygen species (ROS), which inactivate NO [195, 196].
In addition to counteracting oxidative stress, the stimulation of NO synthesis represents an alternative and potentially effective approach [197, 198], for instance by providing further substrate to NO synthase. Theoretically, arginine supplementation meets these needs, and thus it has been tested in many cardiovascular disorders as a potential therapeutic strategy [199]. However, human studies on arginine supplementation have often been a source of debate. Indeed, in healthy subjects as well as in patients suffering from cardiovascular disorders, levels of plasma arginine range from ~45 to ~100 µmol/L [137, 200-202], significantly higher than the eNOS Km of 2.9 µmol/L [203]. Endocrine mechanisms may also contribute to the vasodilation induced by arginine. Indeed, arginine stimulates the release of both insulin [204-206] and glucagon [207] from pancreatic islets of Langerhans. Interestingly, an intravenous infusion of arginine has been shown to induce vasodilation and insulin release in healthy humans, but when insulin secretion was blocked by octreotide co-infusion, no vasodilation occurred, whereas vasodilation was restored by insulin co-administration [208]. Since high intravenous doses of arginine (30 g) have also been shown to induce growth hormone (GH) secretion [209], the vasodilation induced by arginine could also be mediated by GH via a signaling pathway that includes insulin-like growth factor-1 [210, 211]. Substantial data indicate that endothelial dysfunction is highly prevalent in elderly individuals [212, 213]. Endothelial dysfunction has also been implicated in age-associated declines in cognitive and physical function, as well as in the pathogenesis of stroke, erectile dysfunction, and renal dysfunction. Clinical trials testing the effects of arginine on aging-induced endothelial dysfunction have yielded controversial results. An acute intravenous infusion of arginine (1 g/min for 30 min) had no effect on endothelium-dependent vasodilation in healthy older individuals [214]. Similarly, the intravenous infusion of arginine induced a significant increase in renal plasma flow, glomerular filtration rate, natriuresis, and kaliuresis in young but not in aged hypertensives [215]. Another study, conducted in healthy postmenopausal women taking 9 g of arginine per day for 1 month, confirmed that plasma arginine increased without a concomitant significant change in flow-mediated dilation [216]. On the contrary, in a prospective, double-blind, randomized crossover trial in 12 healthy old participants (age 73.8 ± 2.7 years), chronic arginine supplementation (16 g/day for 2 weeks) markedly increased their plasma levels of arginine (114.9 ± 11.6 vs. 57.4 ± 5.0 µmol/L) and significantly improved endothelium-dependent vasodilation [217]. The majority of studies in animal models supports a beneficial effect of arginine supplementation in hypertension, especially in the presence of salt-sensitive hypertension. For instance, both oral [218-220] and intraperitoneal [221, 222] arginine administration in Dahl salt-sensitive (DSS) rats was shown to prevent the increase in blood pressure induced by a high-salt diet. However, arginine was not effective in DSS rats pretreated with high salt for three weeks [218], suggesting that arginine is able to prevent and counteract hypertension in its early stages, but probably not when some changes and pathological remodeling have already occurred.
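The "arginine paradox" implicit in the plasma concentrations and eNOS Km quoted at the start of this passage can be made concrete with a simple Michaelis-Menten estimate (a back-of-the-envelope calculation assuming simple single-substrate kinetics, offered only as an illustration):

$$\frac{v}{V_{\max}} = \frac{[\mathrm{Arg}]}{K_m + [\mathrm{Arg}]} = \frac{45}{2.9 + 45} \approx 0.94$$

That is, even at the lower end of the reported plasma range (~45 µmol/L), eNOS would already be operating at roughly 94% of its maximal rate, so extra substrate alone should barely increase NO output. This is one reason why indirect mechanisms, such as the endocrine effects discussed above or the arginase competition discussed earlier, are invoked to explain cases in which supplementation nonetheless proves effective.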
The outcome of arginine supplementation could also depend on the method of administration. For instance, renal medullary interstitial infusion of arginine prevents the increase in blood pressure in high salt-treated rats, while the intravenous dose necessary to obtain a similar increase in plasma arginine does not affect blood pressure [223]. A rat model of type 1 diabetes mellitus shows an important reduction in blood pressure after 4 weeks of oral arginine treatment [224]; oral arginine administration also prevents fructose-induced hypertension [225]. Oral arginine administration does not correct hypertension in spontaneously hypertensive rats, although it markedly reduces renal damage [226]. Although the beneficial effect of arginine supplementation in hypertension appears to be largely attributable to its impact on NO synthesis, arginine has also been shown to have antioxidant properties, thus affecting the activity of redox-sensitive proteins and lowering blood pressure [227-234]. Indeed, supplementation with 3 g/day arginine for two months increases the serum total antioxidant capacity in obese patients with prediabetes [235]; of note, in vitro experiments performed in endothelial cells have revealed that arginine reduces superoxide release and the cell-mediated breakdown of NO [236]. In the clinical scenario, the oral administration of arginine acutely improves endothelium-dependent, flow-mediated dilatation of the brachial artery in patients with essential hypertension [237]; however, the long-term effects of arginine were not investigated in this study [237]. In a Japanese population, acute intravenous infusion of arginine (500 mg/kg over 30 min) was able to decrease the arterial pressure of both salt-sensitive and salt-insensitive patients [238]. In a similar study conducted on African-Americans, the same dose of arginine reduced blood pressure, with a greater effect in the salt-sensitive population [239]. Interestingly, in hypertensive patients in whom the control of blood pressure with angiotensin converting enzyme (ACE) inhibitors and diuretics for three months was unsuccessful, the addition of oral arginine (6 g/day) was effective in reducing both systolic and diastolic blood pressure levels [240]. Unfortunately, many of the findings on the effects of arginine supplementation in hypertension derive from small clinical studies and, despite the promising efficacy, further investigations are needed, especially large, randomized, controlled trials. The ability to modulate the renin-angiotensin-aldosterone system (RAAS) is another mechanism by which arginine can regulate blood pressure: specifically, arginine inhibits ACE activity, reducing angiotensin II production and its effects on vascular tone [241]. Alongside the preservation of endothelium-dependent vasodilation, the enhanced bioavailability of NO reduces the activation of pro-inflammatory genes and the expression of endothelial adhesion molecules [242]. These events strongly regulate the development and fate of atherosclerosis [243-245]. For these reasons, it is not surprising that arginine has a powerful effect on atherogenesis and its evolution. In particular, preclinical investigations have shown that chronic arginine administration in LDL-receptor knockout (KO) mice significantly reduces the extension of atherosclerotic plaques [246].
Similarly, arginine supplementation in humans reverses the increased monocyte-endothelial adhesion, mirrored by a normalization of platelet aggregation [247]. These effects make arginine a promising drug for disorders like coronary artery disease (CAD), heart failure, and peripheral artery disease (PAD). In 1997, two important studies investigating the effects of arginine in CAD were published [248, 249]. In a placebo-controlled study, Adams and collaborators showed that oral administration of arginine (21 g/day for 3 days) significantly improved the vasodilatory response of the brachial artery in patients with premature CAD [248]. A double-blind, placebo-controlled study conducted on 22 patients with stable angina pectoris revealed that the administration of arginine was able to improve their exercise capacity in just 3 days [249]. The following year, a clinical study confirmed the beneficial effects of long-term arginine supplementation (9 g for 6 months), showing significantly enhanced vascular responses to acetylcholine in patients with coronary atherosclerosis [250]. Preclinical studies were consistent with these findings. For instance, oral administration of arginine reduced intimal hyperplasia in balloon-injured carotid arteries in spontaneously hypertensive rats [251]. This first encouraging evidence prompted further investigations of arginine's effects on CAD. Again, arginine treatment for 4 weeks preserved endothelial function in CAD patients, markedly reducing LDL oxidation [252]. Another study highlighted the method of administration as a major determinant of the efficacy of high-dose arginine supplementation: intra-arterial infusion, but not oral administration, was able to improve endothelium-dependent vasodilation in patients with stable angina pectoris [253]. The therapeutic potential of arginine has also been investigated in heart failure [254-258] and ischemia-reperfusion injury [259-261], often yielding controversial results. Endothelium-dependent vasodilation in response to acetylcholine and ischemic vasodilation during reactive hyperemia are attenuated in the forearm of patients with heart failure [262]. In a seminal paper, Hirooka and collaborators demonstrated that intra-arterial infusion of arginine was effective in reversing the blunted endothelium-dependent vasodilation observed in heart failure [263]. Moreover, oral arginine supplementation (6 g twice a day for 6 weeks) enhanced endurance exercise tolerance in heart failure patients, an important determinant of daily-life activity in patients with chronic stable heart failure [264]. In line with these results, a clinical study carried out in 21 patients with class II/III heart failure (New York Heart Association, NYHA) established that improved endothelial function following exercise training is associated with increased arginine transport [265]. However, another investigation, in 20 patients with NYHA class III/IV heart failure, demonstrated that responses to acetylcholine and sodium nitroprusside determined using forearm plethysmography were not affected by arginine (20 g/day for 28 days), although the actual levels of arginine in the blood were not measured [266]. Exogenous arginine (3 g three times a day for 6 months) administered to patients after an acute myocardial infarction did not improve vascular stiffness measurements or ejection fractions; this clinical trial had to be interrupted due to excess mortality in the treated patients [267].
The improvement of peripheral circulation is critical in patients with PAD, as in severe cases the extensive damage of leg tissues can result in gangrene and amputation [268-270]. Intravenous arginine administration to PAD patients is able to increase calf blood flow and walking distance [271]. Similarly, an acute intravenous arginine infusion (30 g in 60 min) improves NO production and blood flow in the femoral artery of PAD patients [272]. Oral consumption of arginine for 2 weeks is able to increase the pain-free walking distance, improving the quality of life of patients with hypercholesterolemia [273]. Nevertheless, while short-term arginine administration seems to be effective in treating PAD, the results of long-term administration are less consistent. A randomized clinical trial testing the long-term (6 months) effects of arginine supplementation was conducted on 133 subjects. Despite an increase in plasma levels of arginine, the study revealed no significant effect of arginine treatment on NO-dependent vasodilation or on the functional phenotype of PAD patients [274]. Given the fundamental pathogenic role of endothelial dysfunction in diabetes and its complications [275, 276], the therapeutic use of arginine supplementation has been tested. In addition to the direct impact of arginine on endothelial vasodilator capacity, a crosstalk with the insulin pathway has been suggested [150, 277]. In particular, as mentioned above, arginine can induce the release of insulin from pancreatic beta cells [204-206]. On the other hand, insulin is able to reduce asymmetric dimethylarginine (ADMA) concentrations [278] and to stimulate the secretion of arginine [279, 280]. The stimulation of insulin receptors induces NO release, producing an insulin-dependent vasodilation [281-285]. Of note, such a protective effect of insulin on arginine mobility and endothelial NO production is compromised in diabetes [286]. Hence, diabetic patients could be an optimal target population for arginine supplementation. Preclinical studies corroborate this theory: in diabetic rats, the oral administration of arginine reverses endothelial dysfunction [287], restoring endothelium-dependent relaxation and decreasing oxidative stress [224]. Arginine administration in tap water (free base, 50 mg/kg/day) for 4 months has been shown to reduce both cardiac [288] and renal [289] fibrosis in db/db mice, through the interaction of arginine with reactive carbonyl residues of glycosylation adducts of collagen, thereby inhibiting glucose-mediated abnormal cross-linking of collagenous structures. These results were later confirmed in a clinical setting, showing that 2 g of arginine free base, administered orally as two daily doses of 1 g each, reduced the lipid peroxidation product malondialdehyde in diabetic patients [290]. Clinical studies confirmed the reduction in blood pressure, platelet aggregation, and hemodynamic function in diabetic patients treated with intravenous arginine [291]. While in healthy subjects arginine treatment does not seem to affect insulin receptor sensitivity or density [292], in conditions of insulin resistance arginine improves insulin sensitivity; indeed, the intravenous injection of arginine in obese or type 2 diabetic patients stimulates insulin responsiveness, restoring insulin-dependent vasodilation [151, 293]. Similarly, the oral administration of arginine improves hepatic and peripheral insulin sensitivity in a cGMP-dependent fashion [294].
A prospective, crossover clinical trial conducted in mildly hypertensive type 2 diabetic patients revealed a significant decrease in blood pressure in response to arginine, occurring two hours after oral administration; the blood pressure-lowering effect was associated with increased plasma levels of citrulline, whereas no significant changes in insulin levels were detected, suggesting that the observed phenotype was dependent on arginine-induced NO synthesis [295]. Overall, the mentioned studies substantiate the use of arginine in the diabetic population, at least as a prophylactic treatment able to prevent cardiovascular complications of diabetes. One potential limitation for the use of arginine is the risk of reaction with precursors of advanced glycosylation products [296], which are particularly abundant in diabetes. Since the addition of methylglyoxal (abundant in diabetic patients [297]) to arginine has been shown in vitro to produce potent superoxide radicals in a dose-dependent manner [298], it has been suggested that arginine supplementation be combined with antioxidants. A double-blind study on 24 diabetic patients verified this assumption by evaluating the combination of oral N-acetylcysteine and arginine treatments: the combined treatment reduced systolic and diastolic blood pressure, total cholesterol, C-reactive protein, and vascular adhesion molecules, and improved the intima-media thickness during endothelial post-ischemic vasodilation [299]. This last evidence indicates that the combination of arginine with an antioxidant agent could be effective and well tolerated. Overall, the data available in the literature support and encourage the use of arginine supplementation in cardiovascular disorders, especially for preventing the evolution of hypertension and atherosclerosis. One limitation of arginine supplementation remains the selection of the optimal target population. In this sense, we believe that ADMA levels could be very useful in selecting the target population, and patients with increased ADMA/arginine ratios are probably the most suitable population, in which arginine supplementation can actually be effective. Another limitation of arginine use concerns its dose. Indeed, available studies suggest a number of different doses, sometimes effective, sometimes not. For instance, acute oral administration of arginine (9 g/day) has been shown to be unsuccessful in inducing effective NO production [216]. Instead, chronic administration of oral arginine (e.g., vials containing salt-free arginine, 1.66 g/20 mL) has been shown to favor the utilization of arginine for NO synthesis [300], and we have data showing that oral arginine (3 g/day of Bioarginina®, Farmaceutici Damor, 2 vials/day) improves endothelial function in hypertensive patients via the regulation of non-coding RNAs (Gambardella et al., personal communication). Large, prospective, randomized clinical trials are needed to better define the target population for arginine supplementation, along with correct dosage definitions. To date, a dose of ~3 g/day of arginine (e.g., Bioarginina®, 2 vials/day) seems to be effective in favoring the utilization of arginine for NO synthesis, without toxic effects.
The COVID-19 pandemic has had an unprecedented impact on surgical residents and fellows. Rotation and didactic schedules have been modified, trainees have been redeployed, and innovative use of technology has been explored for patient care and trainee education. 1

Though an applicant's financial expenditures vary depending on the individual candidate, the fellowship program, and even the geographical locations of both, 9 there are published surveys that help guide an estimate of the economic burden of surgical fellowship interviewing (Table 1). As only two studies provide information about the monetary cost of fellowship interviews, additional reviews were performed of articles focusing on residency interview costs. 9-22 Although limited by their retrospective nature and the usual survey response-rate biases, the aggregated information demonstrates the expensive nature of fellowship interviews. An interviewee would have spent approximately $6,000 on live interviews when applying for surgical fellowships in 2020, adjusting for inflation. Over fifteen hundred residents from surgical specialties applied for fellowships in 2019. If there are a comparable number of fellowship applicants in 2020, the cumulative saved cost of not having live interviews would be $9 million ($6,000 x 1,500 applicants), though that total needs some correction, as some fellowship applicants interviewed in person before the implementation of social distancing measures. A prior survey of General Surgery Program Directors also revealed the substantial program costs of in-person recruitment. 23 There are around six hundred General Surgery subspecialty fellowships nationwide. The mean hard cost per program, not including personnel effort, is approximately $8,400, adjusted for inflation. The cumulative financial savings for institutions, presuming they held interviews after the start of social distancing, would be over $5 million ($8,400 x 600 programs), and more when accounting for non-General Surgery subspecialty fellowship programs. (These two estimates are reproduced in the short sketch below.)

One study found that, in addition to technical difficulties, applicants expressed concerns about the limited number of faculty interviewers and the inability to view the hospital; they also missed observing how surgical faculty and trainees interact. 26 Healy et al. reported that, while in the minority, 15% of candidates who participated in videoconference interviews for the adult reconstruction fellowship at Newton-Wellesley Hospital's Kaplan Joint Center between 2015 and 2017 did not feel they presented themselves to their satisfaction; 19% were not comfortable ranking the program; and 34% stated that videoconference interviews had an unfavorable impact on their ranking of the program. 24 These studies demonstrate that, despite the reduction in financial spending, there are less quantifiable costs to applicants' abilities to convey themselves and assess a program with virtual recruitment. With COVID-19, virtual interviews have been a necessary adjustment for fellowship training programs in recent match seasons, and they may be the future of residency and fellowship recruitment. With the Association of American Medical Colleges reporting a median graduating student debt of $200,000 in 2019, virtual interviews advantageously avoid adding costs to preexisting economic hardship. 27 They also minimize surgery residents' absences from their training programs.
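For concreteness, the two savings estimates above amount to the following arithmetic; the figures are simply the inflation-adjusted costs and counts quoted in this text, not new data.

```python
# Back-of-envelope reproduction of the savings estimates above. The per-applicant
# and per-program figures are the inflation-adjusted costs quoted in the text;
# the applicant and program counts are the approximate figures it cites.
applicant_cost, applicants = 6_000, 1_500
program_cost, programs = 8_400, 600

print(f"Applicant-side savings: ${applicant_cost * applicants:,}")  # $9,000,000
print(f"Program-side savings:   ${program_cost * programs:,}")      # $5,040,000
```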
Virtual interviews may be utilized in a two-tiered screening program to narrow applicant pools and host candidates who have greater interest in the program. 24-26 Nevertheless, there are concerns about the equivalency of virtual interviews to live recruitment. We therefore offer the following recommendations for improvement:
- Surgery programs should work closely with faculty and staff to become more facile with videoconferencing, diminishing the loss in productivity secondary to unfamiliarity or discomfort with technology. 25 They can use web streaming or pre-produced videos to highlight traditionally unique "live" aspects of interview days, like the campus and city tour. 24 Websites can be supplemented with narrated slide decks and podcasts to save on hard-copy or emailed reproductions of the program materials usually prepared for applicants.
- Further centralization of fellowship virtual interviews using uniform platforms may streamline processes for candidates and programs.
- Fellowship candidates and faculty members should prepare for virtual interviews as they would for live interviews, to maximize the use of time. 28,29 Fellowship programs should help accommodate applicants whose home environments may not be appropriate for professional interviews or who may need modifications of their interview days due to clinical responsibilities during the COVID-19 pandemic and recovery. 30
- Ongoing applicant and program surveys are needed to learn more about the cost benefits and disadvantages of the widespread use of virtual interviews.

As the global impact of the COVID-19 public health crisis continues to evolve, so must thinking about all aspects of surgical education, including how selection is conducted. The unintended effects on costs make this an opportune time to innovate and rethink how to recruit prospective candidates during the current and forthcoming interview seasons.
Dear Editor, At the end of February 2020, the first cases of COVID-19 infection, which had been present in China since December 2019, were reported in Italy. Within one month the infection spread across the country, even if the highest incidence was observed in Northern Italy. To limit the spread of the infection, emergency measures were adopted, and a lockdown was declared at the beginning of March 2020 [1]. These measures led to hospital reorganization and to the cancellation of non-urgent outpatient visits. Consequently, most visits of patients with atrophic gastritis (AG) were postponed. AG is a chronic condition at increased risk for gastric neoplasms, often affecting elderly people [2-4]. Telemedicine may be a valid alternative to outpatient examination for patients with chronic diseases and could be used for gastrointestinal diseases [5-7]. To date, no data about the use of telemedicine in AG patients are available. Since the clinical interview is the mainstay of face-to-face visits in AG patients, this approach may be a suitable remote surrogate for medical care in these patients. We aimed to remotely investigate the impact of the SARS-CoV-2 pandemic on AG patients and to assess the presence of GI symptoms using telemedicine as a tool of supportive care, in a cross-sectional study conducted in a teaching hospital in a low-risk region of Central Italy. The study population derived from a prospective cohort of AG patients diagnosed between 1992 and 2016, included in a surveillance program for gastric neoplasms, to whom standard face-to-face visits are offered annually to assess general well-being and blood tests for anemia/micronutrient deficiencies and to discuss the endoscopic-histological findings of the last surveillance gastroscopy [8] (Figure 1A). During the telephone interviews, the personal risk perception for SARS-CoV-2 infection, the presence of infection-related symptoms (fever >37.5°C, asthenia, anosmia/ageusia, and/or diarrhea for at least 3 days, cough for at least 7 days), and the risk of infection exposure (recent travel to Northern Italy or China, exposure to SARS-CoV-2-infected subjects, and recent contact with hospitals, nursing homes, or healthcare workers) were addressed. GI symptoms were assessed using a standardized questionnaire investigating the presence, severity, and frequency of new GI symptoms (onset during the SARS-CoV-2 pandemic) or already present ones (persisting since the last face-to-face visit and already treated). When available and/or possible, the findings of recent blood tests (<2 months) and endoscopy-histology charts were remotely assessed and discussed by inviting the patient to read them over the phone. Based on these remote clinical assessments, the need for an urgent face-to-face outpatient visit or other medical intervention was established (Figure 1B). Of the 218 eligible patients, 65 (29.8%) did not answer the telephone, one (0.5%) had died of a lung tumor, and one (0.5%) was excluded for a recent diagnosis of pancreatic tumor. Overall, 151 (69.2%) patients adhered to the telemedicine interview and were included; 72.2% were females. Overall, slightly more than half of the patients presented GI symptoms (new or persisting), and dyspepsia was the most common symptom in our interviews, as it is known to be associated with AG.
However, new GI symptoms occurred in a low proportion (11%) of patients, and, owing to the SARS-CoV-2 restrictions, patients treated themselves or consulted their general practitioners, because it was impossible to book an outpatient visit at the hospital. The potential relationship of these symptoms to the pandemic is unlikely, as 80% of the interviewed patients perceived a low or nonexistent pandemic-related risk. Telemedicine was well accepted by AG patients. It seems a promising tool when face-to-face visits are not possible. Several differences exist between telephone interviews during the SARS-CoV-2 pandemic and face-to-face visits, and they deserve some consideration regarding the advantages and pitfalls of telehealth. Firstly, telemedicine does not allow physical examination of the patient. This could be a major limitation in some diseases, but in AG physical examination generally does not play an important role, because it is not helpful in recognizing the most frequent symptoms/signs of AG, such as dyspepsia or anemia. Secondly, due to the lockdown, only a few patients performed blood tests. This represents a problem, because it could lead to overlooking the new onset of anemia and/or micronutrient deficiency needing timely treatment to avoid serious complications, especially in the elderly [9,10]. In the current health emergency, we tried to overcome this point by remotely checking for alert symptoms of anemia or iron/cobalamin deficiency. However, this pitfall was closely linked to the lockdown, as in other periods patients can easily perform blood tests and send them by e-mail or other electronic tools, or more simply read them over the phone to the interviewing physicians. This would allow the physician to decide whether a face-to-face visit to prescribe a specific treatment, or an endoscopic investigation to rule out neoplasia, needs to be scheduled. Therefore, telemedicine could be considered for chronic diseases like AG beyond the current pandemic and could become a model of medical care for selected patients with serious mobility problems or patients who are unable to reach the hospital for a face-to-face visit. In this study only one patient was tested for SARS-CoV-2 infection; therefore, we can neither confirm nor exclude that patients with suspicious symptoms contracted the infection. Further follow-up is not yet available, and the accuracy of our telehealth visits cannot yet be verified; only an outpatient visit and/or elective endoscopy will confirm that AG complications such as anemia or neoplastic lesions have not been missed. In conclusion, telemedicine was well accepted by AG patients and is suitable for remotely evaluating the impact of the SARS-CoV-2 infection and for ruling out red-flag symptoms of AG complications. This innovative approach may be viewed as a precious tool for offering medical care to AG patients during health emergencies, such as the COVID-19 pandemic, that require social distancing and lockdown. In selected AG patients with mobility problems, it can also be proposed, after the end of the pandemic, in other circumstances that make face-to-face outpatient visits infeasible or difficult.
All authors declare no conflict of interest.

Phase I shows the triage for the presence or absence of COVID-19 based on exposure and symptoms. In Phase II, patients with COVID-19 exposure/symptoms are classified as no COVID, suspect, probable, or confirmed 3, while those without exposure/symptoms proceed to the required treatment modality and are classified as new, ongoing, or follow-up. Phase III shows the treatment management process for those with confirmed COVID-19 and for those who will proceed with chemotherapy or radiation therapy. Phase IV includes the disposition plan after intervention for confirmed COVID-19, and the resumption of the regular chemotherapy or radiation therapy schedule for those without COVID-19 (Figure 1). Everyone is screened at the triage area and fills out a COVID-19 screening information record (Figure 2) based on the Department of Health (DOH) decision tool 3. Patients with respiratory symptoms, a history of close contact with COVID-19-positive patients, or a travel history to areas with a high incidence of COVID-19 were not allowed to enter the cancer center; such patients are evaluated by the infection control officer at the emergency room 4. Only staff on duty, patients, and one relative were allowed inside the cancer center premises, and a physical distance of at least 1 meter is observed 5. A no-mask, no-entry policy was implemented. Patients were instructed to bring their own alcohol or hand sanitizers, and a blanket during treatment. Sanitation of the treatment couch and bed was done after every patient using proven sanitizers 6. Patient prioritization for planning and start of radiation therapy was based on the following case categories 7-10:
1. Urgent - superior vena cava syndrome, cord compression, pain, bleeding, life-threatening symptoms, brain metastases, and patients coming from remote places were scheduled on the same day or the day after consultation.
2. Semi-urgent - colorectal, concurrent protocol, head and neck, cervical, and lung cases were scheduled two to seven days after consultation.
3. Not urgent - breast, prostate, endometrial, post-operative, skin, and asymptomatic brain cases were scheduled eight to ten days after consultation.
(This scheduling rule is sketched below.) Patient prioritization for chemotherapy was based on the tiered approach of the European Society of Medical Oncology, categorized as high, medium, and low priority 11. Walk-in and new patients were scheduled during the clinic hours of the attending oncologist. Physical check-up of patients was limited, and teleconsultation was encouraged using the platform https://doxy.me 24,25. Patients classified as suspected/probable/confirmed COVID-19 follow the infection control committee protocol based on the DOH-Philippine Society of Molecular and Infectious Diseases guidelines 4. The oncologist and referring physician consider the risk to both patient and staff and decide whether to pursue chemotherapy or radiation therapy 5,8,14. An IgM and IgG rapid diagnostic test (RDT) was used as the initial test for COVID-19, because the real-time reverse transcription-polymerase chain reaction test (rRT-PCR), the gold standard for confirmation of COVID-19, is not readily available and accessible. Patients with negative RDT results but significant symptoms must be re-tested using rRT-PCR 4. A dedicated nurse in full PPE handles confirmed infected patients for chemotherapy in an isolation room with a separate entrance and exit. No relatives or visitors are allowed.
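To make the scheduling rule concrete, here is a small sketch encoding the three urgency categories above as start-of-treatment windows; the category sets and windows follow the text, while the function and label names are our own illustrative choices, not part of any guideline.

```python
# Illustrative sketch of the radiotherapy scheduling rule described above.
from datetime import date, timedelta

URGENT = {"superior vena cava syndrome", "cord compression", "pain",
          "bleeding", "life-threatening symptoms", "brain metastases",
          "remote residence"}
SEMI_URGENT = {"colorectal", "concurrent protocol", "head and neck",
               "cervical", "lung"}
# Everything else (breast, prostate, endometrial, ...) is treated as not urgent.

def rt_start_window(indication: str, consult_day: date) -> tuple[date, date]:
    """Return the earliest and latest planned start dates after consultation."""
    if indication in URGENT:           # same day or the day after consultation
        return consult_day, consult_day + timedelta(days=1)
    if indication in SEMI_URGENT:      # two to seven days after consultation
        return consult_day + timedelta(days=2), consult_day + timedelta(days=7)
    return consult_day + timedelta(days=8), consult_day + timedelta(days=10)

print(rt_start_window("cervical", date(2020, 5, 5)))
# (datetime.date(2020, 5, 7), datetime.date(2020, 5, 12))
```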
In case radiation therapy may be life-saving, COVID-19 patients are treated last and/or on a separate machine if available; otherwise, priority is given to delaying or stopping radiation therapy. Clinical staff, cashiers, and coordinators report as a skeletal workforce, while administrative, marketing, human resources, and finance staff work from home as part of the modified working shifts. Limited supplies of PPE were outsourced, and the rest came from donations. Ultraviolet light was used to sanitize PPE, and washing was done so that it could be reused, according to the prescribed suit specifications. The chemotherapy and brachytherapy areas were transformed into sterile areas. Transparent acrylic or plastic shields were constructed at the triage, reception, cashier, and nursing areas for protection between personnel and patients. This collaborative cancer management strategic action plan and workflow attempts to answer the uncertainties of this pandemic despite meager resources. It may guide cancer centers in developing countries on how to adapt during the current adversities, and we recommend adopting it in the contemporary normal period.
Does a Crying Child Enhance the Risk for COVID-19 Transmission? The pandemic of coronavirus disease (COVID-19) has led all of us to recalibrate both our personal and professional lives [1]. In our routine pediatric outpatient practice for non-COVID cases, i.e., well-baby visits and children presenting with afebrile, non-respiratory symptoms, a surgical face mask with proper hand hygiene and gloves has been recommended for health care professionals [2]. However, for those handling aerosol-generating procedures (AGPs), respirators and additional personal protective equipment (PPE) are recommended [3]. An aerosol is defined as a suspension of fine solid particles or liquid droplets in air or another gas. Aerosols of varying severity are generated on sneezing, coughing, talking, and also during normal breathing [4]. AGPs are believed to produce aerosols and droplets that act as a source of respiratory pathogens, exposing health care workers to the agents of acute respiratory infections, including severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [5]. AGPs arise when performing certain medical procedures, such as intubation, manual ventilation, non-invasive ventilation, and tracheostomy insertion, on infected cases. However, it is not clear whether the risk is due to direct airborne transmission or to secondary exposure to respiratory droplets. It is established that even loud speaking results in increased aerosol generation, i.e., aerosol super-emission [6]. Extrapolating the same logic, even a crying and screaming child should produce aerosol super-emission. Although an operational definition for AGPs is in place, the relation of crying to increased aerosol generation has so far not been stressed. In a pandemic situation, we need to ponder several points: even infants and toddlers who come for routine vaccinations or non-respiratory complaints can be asymptomatic carriers or in the pre-symptomatic period of transmission; implementing source-control measures like face masks and social distancing in this age group is practically difficult; crying, a common occurrence in this age group, increases the risk of aerosol generation and transmission; and the proximity of these children to caregivers and their attendants, along with sustained crying due to anxiety or fear, might further increase the risk and load of aerosol. In view of the as yet unknown risks posed by expected or unexpected crying of asymptomatic children in the transmission of COVID-19, it may be prudent to make every effort to avoid examining a crying child without adequate precautions.

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is uncommon in children [1], with greater morbidity and mortality in adults and the elderly. A number of hypotheses may explain the low susceptibility of children to the COVID-19 virus [2], viz.: (i) immaturity and limited function of angiotensin-converting enzyme 2 (ACE2) receptors in children, as undifferentiated cells that express low levels of ACE2 are not readily infected by SARS-CoV; (ii) the immature innate immune system in young children results in less inflammation and consequently fewer symptoms; and (iii) possible cross-reactivity of antibodies against other viruses (influenza, adenovirus, respiratory syncytial virus, etc.) with SARS-CoV-2, which could provide partial protection. As COVID-19 infection is not universally mild in children [3], it is important that they are protected as a vulnerable population, as there are still limited data on the risk factors for severe infection in children. The long-term effects of COVID-19 on the lungs of children are not known, even for those with moderate symptoms. In patients hospitalized in French pediatric units in recent weeks, chest computed tomography (CT) scans have often been pathological, even in children with limited respiratory signs, with an associated decline in lung function (unpublished data). In light of this, should not all children with moderate to severe respiratory symptoms be treated, irrespective of their comorbidity? Why do pediatricians appear unwilling to consider employing the COVID-19 treatments that are available, e.g., hydroxychloroquine and azithromycin [4]? These drugs (which are already widely used in pediatrics for other indications) certainly have side effects that are of concern, but their use in a hospital environment allows these side effects to be monitored and ensures greater safety for the patient [5]. In the absence of specific antiviral treatments, pediatricians need more virological, epidemiological, and clinical data to better treat and manage COVID-19 infections. It should be kept in mind that children, even when asymptomatic, may be a potential cause of spread and transmission of the disease in their communities [6]. In light of this, barrier precautions need to be rigorously applied within families in order to protect the elderly.
The COVID-19 pandemic has adversely affected healthcare delivery systems throughout the world [1]. While the impact of this ongoing pandemic is largely asymmetric, with relatively developed countries so far bearing the largest burden of the virus, developing economies with less robust infrastructure are still bracing for the peak of the pandemic. Healthcare in general, and cancer care in particular, is still evolving to adapt to this unprecedented challenge. This crisis has also caused the entire community to reconsider and revisit potential strategies for maintaining the safety of health caregivers, patients, and establishments. Cancer management poses a unique challenge in terms of the long duration of treatment, the need for regular monitoring, the acute and delayed morbidities associated with aggressive therapies, etc., resulting in an enhanced risk of contracting infections [2]. Nevertheless, cancer treatment has been declared an essential service that cannot be compromised during the pandemic [3,4]. Numerous guidelines and recommendations have been published by various agencies regarding the management of malignancies under these unusual circumstances [5,6]. The management of locally advanced cervical cancer presents specific challenges in this context. It is a rapidly proliferating cancer with high cure rates; therefore, postponement of treatment is not a viable option [7,8]. External beam chemo-radiotherapy, which is the standard treatment, is usually delivered in conventional fractions of 1.8 Gy to 2 Gy over 4.5 to 5 weeks. Unlike in other malignancies (e.g., prostate or lung), hypofractionated regimens to reduce the treatment time have not been proven efficacious in cervical cancer. Brachytherapy is indispensable for the curative management of cervical cancer; it usually follows chemo-radiotherapy and accounts for 50% of the radiation treatment [9]. Brachytherapy is delivered in multiple applications and fractions, with each application requiring placement of intracavitary/interstitial applicators under anesthesia, followed by imaging, planning, and treatment delivery. Therefore, brachytherapy may portend additional risks to the patient as well as to the health caregivers in a time of pandemic. Hence, all processes related to brachytherapy described in various guidelines, such as application, imaging, planning, and delivery, need to be adapted to a pandemic environment [10]. Before brachytherapy, along with the routine pre-procedure work-up, COVID-19 testing should be considered before each application. For patients who are hospitalized for the entire duration of brachytherapy, a single test before the first application may be considered sufficient, if appropriate precautions are assured during their hospital stay. This has the potential to reduce high-risk exposure of health caregivers, especially since early studies have identified the presence of coronavirus in the urine and anal swabs of asymptomatic individuals, apart from droplet-borne spread [8,11]. However, the accuracy of the testing modality used should be taken into account. For patients who were admitted and tested negative for COVID-19 during external beam radiotherapy, extending the hospital admission to complete brachytherapy in a relatively safe environment may be considered, to limit the need for repeated testing. However, in areas with a high incidence of COVID-19, tests may be repeated before each application, with the assumption that a negative test is 'valid' for 3 to 5 days (this rule is rendered as a small sketch below).
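As a concrete rendering of that re-testing rule, here is a hedged sketch of a decision helper; the 5-day validity window is our assumption at the upper end of the 3-to-5-day range mentioned above, and the function names are illustrative, not from any guideline.

```python
# Hedged sketch of the pre-application testing rule described above.
# The validity window and re-testing policy follow the text; names are ours.
from datetime import date, timedelta

NEGATIVE_VALID_DAYS = 5   # the text suggests 3-5 days; we assume the upper bound

def needs_retest(last_negative: date, application_day: date,
                 hospitalized_throughout: bool) -> bool:
    """Return True if a fresh COVID-19 test is needed before this application."""
    if hospitalized_throughout:
        # A single test before the first application is considered sufficient
        # if precautions are assured during the hospital stay.
        return False
    return application_day - last_negative > timedelta(days=NEGATIVE_VALID_DAYS)

# Example: an outpatient whose last negative swab was 6 days ago -> re-test.
print(needs_retest(date(2020, 5, 1), date(2020, 5, 7), False))  # True
```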
Figure 1 shows the schema and a possible workflow for COVID-19 testing of cervical cancer patients undergoing brachytherapy. Patients who test negative for COVID-19 should be treated with universal precautions, to reduce potential high-risk exposures of patients and staff. For those who have suspicious symptoms, or those with equivocal results on initial testing, deferring brachytherapy, isolating, and promptly (re)testing, depending on institutional policies, may be a reasonable option to confirm COVID-19 status before the further course of action is decided. For patients who test positive, whether symptomatic or asymptomatic, the management strategy is more complicated and involves critical decisions on the treatment of COVID-19, the continuation of treatment with brachytherapy, the protection of health caregivers involved in brachytherapy processes and delivery, and other routine processes. Though evidence-based recommendations for the management of COVID-19 are available in the current international guidelines, questions regarding the management of cancer in patients who test positive before or during their cancer treatment remain mostly unanswered [12,13,14,15]. This mainly applies to the delivery of radiation therapy in cervical cancer, where overall treatment time is an important prognostic factor [10,16]. Some small retrospective series have documented increased peri-operative mortality and morbidity in COVID-19-positive patients who underwent major elective and emergency surgeries [14,15,17]. However, data on minor procedures, including brachytherapy, are lacking at present. While the risks of continuing brachytherapy surely outweigh the benefits in symptomatic positive patients, strategies for treating asymptomatic positive patients need to be resource-customized. Employing a committed team with full protective equipment, a dedicated operating room and treatment machine, disinfection protocols for brachytherapy equipment, etc., may not be a feasible option for most centers [18]. In such circumstances, it is not unreasonable to wait for the recovery of the patient and a negative COVID-19 result. Once a negative result is documented, re-testing may be done after a week (or as recommended by the local public health authorities), and brachytherapy can be considered accordingly. However, the number of applications ought to be limited, and delivering multiple fractions per application should be considered in such cases, to compensate for the prolongation of the overall treatment time. Maintaining a fine balance between the potential risks associated with such attempts and the risks related to undue prolongation of the overall treatment time is imperative for achieving optimal outcomes. In principle, cancer care units should be well prepared to confront such situations, with appropriately equipped ICU facilities and advanced precautions and measures. Pre-operative parts preparation, medications, and consenting for brachytherapy procedures should include disclaimers related to the risk of COVID-19 infection. Pre-operative admissions may be avoided, to reduce the risk of acquiring infection from asymptomatic patients in general wards; alternatively, admission to isolated rooms may be considered. During the brachytherapy procedure, universal precautions with personal protective equipment and appropriate disinfection practices must be adopted. Similarly, the number of health caregivers needed for the procedure should be restricted to the bare minimum [8].
Wherever applicable, the staff should be divided into groups, so that ongoing treatments can be sustained even if some of the members are exposed or infected. Regional anesthesia should be preferred wherever feasible, to avoid the aerosol-generating procedures associated with general anesthesia. However, achieving adequate perineal and pelvic muscle relaxation for vaginal packing then assumes even greater importance, to stabilize the application for longer durations and reduce the doses to organs at risk, especially if a larger dose per fraction or multiple fractions per application are planned [16,19]. Repetitive imaging for treatment planning may pose additional risks to patients and health caregivers. The use of portable imaging in the brachytherapy treatment area, to restrict multiple patient transfers, should be implemented. If resources for imaging are limited, 2-dimensional planning can be considered, with point A-based prescriptions. For facilities practicing volumetric planning, the use of MRI planning can be restricted to the first application, mainly if the volume of residual disease is low and extensive reduction in further fractions is not anticipated. Alternatively, CT and trans-rectal ultrasound can also be used for volumetric planning in experienced facilities [10]. In centers where resources for planning are severely restricted, library plans available with the treatment planning system can be used, based on applicator parameters, if the geometry is well maintained and reproducible. Delivery of multiple fractions per application, ranging from 2 to 5, has the potential to limit high-risk exposures of health caregivers and patients. However, such attempts should be based on a sound and robust biological rationale and supported by clinical evidence, so that an appropriate balance can be achieved between high-risk exposures and disease/toxicity-related outcomes, without undue compromise of either. Needless to say, clinical factors like the site and volume of residual disease, the type of brachytherapy application, the imaging modalities utilized, implant geometry, relative doses to the organs at risk, the logistics of the treatment facility, accessibility, expected compliance, etc., should be considered before adopting the recommended fractionation schedules [10]. Table 1 presents the various published fractionation schedules, numbers of applications, biologically equivalent doses, and specific pros and cons for each [16,20,21,22]; a worked comparison follows below. The American Brachytherapy Society recommends five to six fractions of 5-6 Gy each, interdigitated with external beam radiation [20]. While this schedule is radiobiologically sound and has a strong evidence base for outcomes, it is limited by the number of applications required. International multicenter studies by the EMBRACE group have reported excellent local control and toxicity outcomes with four fractions of 7 Gy, delivered in 2 brachytherapy applications over 1 week, with 2 fractions per application in the majority [16,23]. The additional advantage of this regimen is that the applications can be limited to two, without compromising the number of fractions. Another attractive option is to deliver high doses per fraction for each application. Retrospective and prospective studies have shown larger fraction sizes of 9 Gy in two applications to have inferior local control without a significant impact on overall survival [20,21].
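To see how these schedules compare radiobiologically, here is a minimal sketch of the standard linear-quadratic calculation behind biologically equivalent doses; the schedules listed and the alpha/beta values of 10 Gy (tumor) and 3 Gy (organs at risk) are conventional illustrative assumptions, not figures taken from Table 1.

```python
# Minimal sketch: linear-quadratic comparison of brachytherapy schedules.
# Schedules and alpha/beta values are illustrative, conventional assumptions.

def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Equieffective dose in 2-Gy fractions: EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1 + 2 / alpha_beta)

schedules = {"5 x 6 Gy": (5, 6.0), "4 x 7 Gy": (4, 7.0), "2 x 9 Gy": (2, 9.0)}
for name, (n, d) in schedules.items():
    print(f"{name}: tumour EQD2 = {eqd2(n, d, 10.0):.1f} Gy, "
          f"OAR EQD2 at full dose = {eqd2(n, d, 3.0):.1f} Gy")
```

On these assumptions, 4 x 7 Gy and 5 x 6 Gy deliver a similar tumor EQD2 (about 40 Gy), while 2 x 9 Gy falls well short (about 28 Gy), consistent with the inferior local control noted above.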
However, such regimens can still be considered for small-volume residual disease or for elderly patients, after an assessment of risks and benefits. A single application with multiple fractions has been practiced for pure interstitial template-based brachytherapy boosts in post-operative recurrences, with reasonably good outcomes [24]. Recently, there have been attempts to deliver multiple fractions in a single application for intact cervical cancer, in which MRI-based planning was used, 9 Gy was delivered on day 1, and two further fractions were delivered through the same application. Even though the feasibility of such a regimen has been proven in that study and the early disease-related outcomes are encouraging, long-term results regarding toxicity are awaited. Finally, at brachytherapy plan evaluation, if multiple applications are considered necessary to maintain a favorable therapeutic ratio, the applications can be repeated at shorter intervals (twice a week), on a case-by-case basis, to achieve and maintain the optimal overall treatment time. In summary, the restriction of pre-operative admissions, mandatory pre-procedure testing for COVID-19, triaging patients and managing them appropriately according to test results, adherence to universal precautions, the use of regional anesthesia, reducing the number of applications, and delivering multiple fractions with each application are some of the practical alternatives that can be explored for delivering essential brachytherapy treatments to cervical cancer patients without unduly compromising the therapeutic ratio. Such measures should be individualized and customized to the severity of the pandemic in each region, to limit high-risk exposures of patients, health caregivers, and establishments/institutions in the future.
The city of Wuhan in China has gained global attention due to an ongoing outbreak of viral pneumonia associated with a novel coronavirus, causing coronavirus disease 2019 (COVID-19). Reported illnesses have ranged from people with little to no symptoms to people who are severely ill (fever, cough, and shortness of breath) or dying [1]. While the infection is closely linked to contact with bush meat from wild or captive sources at the Wuhan seafood market, human-to-human transmission has been discovered [2], indicating the risk of a much wider spread. The situation could be worse because the outbreak happened to occur during Chunyun, a large-scale conventional passenger transport within China for the Chinese Lunar New Year. Confirmed infected cases have also been found outside China through flights connecting Wuhan and international cities. As of February 22, 2020, according to the World Health Organization (WHO) [3], China (including Hong Kong, Macau, and Taiwan) had announced 76 392 accumulated cases of laboratory-confirmed COVID-19 pneumonia (including 20 673 cured cases) and 2 348 deaths. Another 1 402 accumulated confirmed cases were reported in 26 countries, territories, and areas outside China, with 11 deaths reported from Korea, Japan, the Philippines, France, and Iran. In order to suppress further infection, the Chinese government and health authorities have been dedicated to a range of response measures, such as locating and quarantining close contacts, blocking Wuhan and a number of other severely infected cities, extending the Lunar New Year vacation, and establishing makeshift hospitals. However, it has been found that transmission can occur during the incubation period [4], which can be as long as 12.5 d [5], with no symptoms. As a result, carriers could have spread the coronavirus nationwide and globally without awareness, and the epidemic picture is still worsening on a daily basis. Under this circumstance, an authoritative analysis of the pneumonia infection is needed to guide life and work schedules, as well as to prevent the public from panicking over untrustworthy information sources. This article develops the Delay Differential Epidemic Analyzer (D²EA), a mathematical model that comprehensively depicts the epidemic picture of COVID-19. The D²EA begins with a traditional Susceptible-Exposed-Infectious-Recovered (SEIR) epidemic model, in which everybody freely contacts everybody else. Then we consider the quarantine policies adopted by the Chinese health authorities in the later stage and introduce a set of new (Quarantined) states into the model, which makes it capable of representing the true epidemic dynamics. Finally, we consider the potential future variations of three practical factors, namely, the contact rate (during the Chunyun return journey), the recovery rate (due to the emergence of a vaccine or mutation of the virus), and the quarantine rate (associated with the dynamics between hospital resources and the spread of infection), and analyze how these factors influence the future trends of the epidemic. In the experimental part, we apply the D²EA to Hubei Province, the source of the epidemic. The D²EA model is fitted to daily epidemic data (infected, recovered, and dead cases) reported by the National Health Commission of the People's Republic of China, the Health Commission of Hubei Province, and the Centers for Disease Control and Prevention (CDC). We also compare the D²EA with the traditional SEIR model to illustrate the virtues of the quarantine states and reveal the D²EA's superiority.
Through further sensitivity analysis, we show that the D²EA is not sensitive to some important parameters, so that it can comprehensively depict the epidemic picture of the COVID-19 pneumonia. Since the initial cases of clustered pneumonia infection in Wuhan were first reported to the WHO on December 31, 2019 [6], medical researchers worldwide have been closely watching the situation and engaging in scientific analysis along many dimensions. While some study the genome and protein characteristics of the COVID-19 virus for a better understanding [7-11], others look into its potential primary reservoirs to prevent future zoonotic disease [12,13]. There are also research works on drug treatment [14], vaccine development [15], the effectiveness of airport screening [16], etc. Among these various works, those focusing on epidemiologic analysis and transmission dynamics are the most related to ours. Among them, Zhao et al. [17] accounted for the impact of variations in the disease reporting rate and modeled the epidemic curve of the COVID-19 case time series. With the collected data, Wu et al. [18] nowcasted and forecasted the potential domestic and international spread of the COVID-19 outbreak. Riou and Althaus [19] looked into the transmission patterns and indicated clues of human-to-human transmission. Read et al. [20] took the flight connections between Wuhan and other cities into consideration and established a transition model to analyze the transmission dynamics of COVID-19. In principle, the above works exploit typical epidemic models and probabilistic methods to analyze the epidemiologic characteristics of the pneumonia statistically. However, most of them mainly focus on the early epidemic stage and aim to analyze the intrinsic characteristics of COVID-19 itself. Many essential factors, such as the effectiveness of quarantine and the impact of the Chunyun return journey, are not quantitatively considered or are incorrectly estimated. The SEIR model is frequently referred to in the aforementioned works and other literature. It is a classical mathematical model for analyzing epidemiologic characteristics [21]. By estimating the basic reproductive number, incubation period, and other parameters of interest, the SEIR model can describe the early stage of epidemics across different seasons, ages, and heterogeneities of virus transmission [22-24]. However, in most real situations the traditional SEIR model is too simple to forecast future pictures, due to its assumption that individuals freely contact each other. Some previous works model physical epidemics by adding parameters or compartments to the traditional SEIR model and achieve good results. Funk et al. [25] considered the infectiousness of the Ebola virus on the dead and split infectious people into those who seek healthcare and those who do not. Wang and Ruan [26] used limited data to simulate the SARS outbreak in Beijing by simplifying the model to a two-compartment suspect-probable model and a single-compartment probable model. Gilberto et al. [27] suggested spatiotemporal models in order to reproduce the time-series data and the spread of influenza A. These works go even further and build their models based on reality, but they are therefore limited to their specific situations and cannot be directly applied to analyze COVID-19. In this section, we introduce the scheme of the D²EA model, including the basic assumptions, the main components of the model, and the practical factors the model considers.
Briefly speaking, the D²EA starts with the traditional SEIR model and augments it with newly designed quarantine states and corresponding transition rules. The D²EA also accounts for potential variations of three factors, namely, the contact rate, the recovery rate, and the quarantine rate. An overall scheme of the D²EA model is shown in Fig. 1, where the center part is the state transition graph of the model, the left part shows the practical factors considered by the model, and the right and bottom parts show the optimization and evaluation process the model takes up. The D²EA model is established based on the following basic assumptions. (1) The susceptible population is homogeneous in terms of age group. The assumption of homogeneity greatly facilitates our modeling. Although death cases are more likely to be found among elderly people, the spread of infection has not shown an affinity to the population of a certain age; in fact, the confirmed cases cover all age groups, so it is valid to make such an assumption. (2) Recovered cases are not going to be infected again. Although it has been reported that recovered cases can still be infected, we believe such cases can be neglected, because the recovered population will surely pay more attention to their health during the epidemic period and are thus less likely to get exposed. (3) Birth rate and natural death rate are neglected. According to our survey, the annual birth rate and annual natural death rate of Hubei Province are 1.15% and 0.70%, much smaller than the death rate (3.55%) and cure rate (21.4%) of COVID-19 in Hubei as of February 22, 2020. Furthermore, it can be inferred that the time span of the epidemic will be far less than a year. As a result, the influence of the birth rate and natural death rate can be neglected in our model. (4) A quarantined human will not further infect other people. In our modeling, as long as one transfers to a quarantine state, we assume he/she will not infect other people anymore. This assumption makes the D²EA model more tractable, and it is also practical in reality. (5) No case dies in the incubation period. There is no report of death during the incubation period yet, and we do not consider this possibility. (6) The quarantine duration is long enough. In the D²EA model, after a quarantine duration of τ days, close contacts are divided into positive (infected) cases and negative cases. We assume τ is long enough for this division to be accurate. As one of the most classic epidemic models, the SEIR models the flows of population between four states: S (Susceptible), E (Exposed), I (Infectious), and R (Recovered). Since death cases have been reported, we add a new state D (Death) to better depict the epidemic picture and try to predict the increase in death cases. This traditional model is illustrated in Fig. 2, and the dashed box over it indicates the free spatial flow of the populations. Unlike the previous cases of SARS and MERS, the exposed population of the COVID-19 pneumonia is also infectious. This fact urges us to modify the conventional model to adapt it to the real case, and we introduce two parameters, β_I and β_E, to represent the probabilities of disease transmission per contact with the infectious population I(t) and with the exposed population E(t), respectively. We also account for the fact that a few of the exposed cases can recover on their own, and introduce two recovery rates, γ_I and γ_E, for I(t) and E(t), respectively.
We also introduce the death rate α_I for I(t). The ordinary differential equations of this traditional model, Eqs. (1)-(6), describe the transitions between these states and can be easily derived according to Fig. 2:

dS(t)/dt = -r(t)[β_I I(t) + β_E E(t)] S(t)/N(t),   (1)
dE(t)/dt = r(t)[β_I I(t) + β_E E(t)] S(t)/N(t) - ε E(t) - γ_E E(t),   (2)
dI(t)/dt = ε E(t) - γ_I I(t) - α_I I(t),   (3)
dR(t)/dt = γ_I I(t) + γ_E E(t),   (4)
dD(t)/dt = α_I I(t),   (5)

with

N(t) = S(t) + E(t) + I(t) + R(t).   (6)

(This baseline model is sketched numerically below.) However, this traditional model can only describe the early stage of the epidemic, because the SEIR model assumes that individuals freely contact each other. To make our model more powerful, we need to consider more factors and revise the model. After the outbreak of the COVID-19 pneumonia, the Chinese government and health authorities dedicated themselves to a range of prompt response measures, such as locating and quarantining close contacts, blocking Wuhan and a number of other severely infected cities, extending the Lunar New Year vacation, and establishing makeshift hospitals. Therefore, we believe that the most important factor to consider is the influence of the quarantine measures. To account for this factor, we add a range of quarantine states to the model and establish the corresponding differential relations. The revised model is illustrated in Fig. 3, and the new equation set is explained as follows. To make the explanation clearer, we introduce the general symbols X_in(t) and X_out(t) for a state X (X can be S, E, I, R, D, Q_S, Q_N, Q_E, Q_P, or Q_I) to represent the total population having reached and left state X up to time t, respectively. Correspondingly, the derivatives of X_in(t) and X_out(t) represent the numbers of humans entering and leaving state X per unit time. Therefore we have Eq. (7) to express the dynamics of any state X:

dX(t)/dt = dX_in(t)/dt - dX_out(t)/dt.   (7)

All the equations we derive in the following are based on this basic equation. There are in total 5 quarantine states in the revised model, i.e., Q_S, Q_N, Q_E, Q_P, and Q_I. When infectious cases of state I are constantly sent to the hospital and quarantined at rate δ, they transition from state I to state Q_P, which represents quarantined suspected and positive cases. Here "positive" refers to being actually infected with the COVID-19, while "negative" refers to suffering from suspectable symptoms due to other diseases (e.g., the common cold); with the latter we define state Q_N, the quarantined suspected but negative cases. We define η as the proportion of humans exhibiting suspectable symptoms due to other diseases among the whole susceptible population, and we assume these humans also get quarantined at rate δ. Both Q_N and Q_P are suspected cases and need time T to be confirmed. Then Q_P transitions to Q_I, the state of confirmed infected cases, while Q_N stops being quarantined and returns to S. Hence, the humans leaving state Q_N (or Q_P) at time t are those who entered Q_N (or Q_P) at time t - T, and we have

dQ_N,out(t)/dt = dQ_N,in(t - T)/dt,   (8)
dQ_P,out(t)/dt = dQ_P,in(t - T)/dt.   (9)

Please notice that these are delay differential equations, with the confirmation time T as the delay factor. Meanwhile, the humans in state Q_I are the confirmed cases being treated in hospital, and they are cured at rate γ or die at rate α. Combining the inflows with the delayed outflows of Eqs. (8)-(9) in Eq. (7), we have Eqs. (10)-(12) for the quarantine states Q_N, Q_P, and Q_I:

dQ_N(t)/dt = δ(t) η S(t) - dQ_N,in(t - T)/dt,   (10)
dQ_P(t)/dt = δ(t) I(t) + ε Q_E(t) - dQ_P,in(t - T)/dt,   (11)
dQ_I(t)/dt = dQ_P,in(t - T)/dt - γ Q_I(t) - α Q_I(t).   (12)

(The delayed bookkeeping of Eqs. (8)-(12) is also sketched below.) During time dt, δ(t)I(t) cases from the infectious population I(t) and δ(t)ηS(t) cases from the susceptible population S(t) are quarantined, transitioning to Q_P(t) and Q_N(t), respectively. At the same time, all of their close contacts during the last τ days are also quarantined. These contacts are quarantined for τ days, where τ is the longest incubation period of the COVID-19.
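As a concrete illustration, the baseline model of Eqs. (1)-(6) can be integrated numerically in a few lines; this is only a sketch, and every parameter value below is an illustrative placeholder rather than a value estimated or fitted in this work.

```python
# Minimal sketch: numerical integration of the traditional SEIR(D) baseline,
# Eqs. (1)-(6), with an infectious exposed class. Every parameter value is an
# illustrative placeholder, not an estimated or fitted value from this work.
import numpy as np
from scipy.integrate import solve_ivp

r = 5.0                        # contacts per person per day
beta_I, beta_E = 0.05, 0.02    # transmission probabilities per contact
eps = 1 / 5.2                  # 1 / assumed mean incubation period (d^-1)
gamma_I, gamma_E = 0.07, 0.01  # recovery rates of I and E (d^-1)
alpha_I = 0.01                 # death rate of free infectious cases (d^-1)

def seird(t, y):
    S, E, I, R, D = y
    N = S + E + I + R                          # Eq. (6): the living population
    new_inf = r * (beta_I * I + beta_E * E) * S / N
    return [-new_inf,                          # Eq. (1): dS/dt
            new_inf - (eps + gamma_E) * E,     # Eq. (2): dE/dt
            eps * E - (gamma_I + alpha_I) * I, # Eq. (3): dI/dt
            gamma_I * I + gamma_E * E,         # Eq. (4): dR/dt
            alpha_I * I]                       # Eq. (5): dD/dt

y0 = [59_170_000 - 1, 1, 0, 0, 0]              # E(0) = 1; Hubei-sized population
sol = solve_ivp(seird, (0, 160), y0, t_eval=np.arange(161.0))
print(f"peak free infectious population: {sol.y[2].max():.3g} "
      f"on day {sol.y[2].argmax()}")
```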
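The delayed outflows of Eqs. (8)-(12) are what make the full system a delay differential equation: whoever enters a suspected compartment leaves it exactly T days later. A minimal Euler scheme with an explicit history buffer captures that bookkeeping; the inflow curve below is an arbitrary assumption for illustration.

```python
# Minimal Euler sketch of the delay structure in Eqs. (8)-(12): whoever enters
# a suspected compartment leaves it exactly T days later (confirmed or
# released). SciPy has no built-in DDE solver, so we keep an explicit history
# buffer of the inflow; the inflow curve itself is an arbitrary assumption.
import numpy as np

dt, T, horizon = 0.1, 3.0, 60.0     # step (d), confirmation delay (d), span (d)
steps, lag = int(horizon / dt), int(T / dt)

def inflow(t):
    # Assumed bell-shaped wave of suspected cases arriving around day 20.
    return 100.0 * np.exp(-((t - 20.0) / 5.0) ** 2)

Q = np.zeros(steps + 1)     # population currently held in the suspected state
hist = np.zeros(steps)      # inflow history, used for the delayed outflow term
for k in range(steps):
    hist[k] = inflow(k * dt)
    outflow = hist[k - lag] if k >= lag else 0.0   # those admitted T days ago
    Q[k + 1] = Q[k] + dt * (hist[k] - outflow)     # dQ/dt = in(t) - in(t - T)

print(f"peak suspected-and-quarantined queue: {Q.max():.1f}")
```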
Among the quarantined contacts, those who are not infected with the COVID-19 (Q_S) return to S, and, according to our assumptions, the exposed ones who have indeed been infected (Q_E) either become suspected cases (enter Q_P) at rate ε or recover (enter R) at rate γ_E. The number of susceptible humans contacted by one potentially infected person during the last τ days, c(t), is estimated by

c(t) = ∫ from t-τ to t of r(u) S(u)/N(u) du.   (13)

From c(t) we obtain the numbers of quarantined humans entering states Q_S, Q_P, and Q_E during time dt (Eqs. (14)-(16)), and then the differential equations representing the dynamics of the quarantine states Q_S and Q_E (Eqs. (17)-(18)). Up to now, we have finished designing and quantitatively representing the dynamics of the five new states, which take the quarantine measures into consideration. Since we constitute the quarantine states with cases transitioned from the original S, E, I, and R states, the previous traditional equations also require revision. After proper modification, we obtain Eqs. (19)-(29), the complete differential equation set of the D²EA model. Please notice that the dQ_S,in(t)/dt term is already shown in Eq. (14). Since part of the equations are delay differential equations, we let X(t) = X(0) for t < 0 in Eq. (29), where X can be any of S, E, I, R, D, Q_S, Q_N, Q_E, Q_P, or Q_I; this guarantees that the initial condition is continuous on the interval [-max{τ, T}, 0]. E(0) = 1 because we assume there is only one infected human at first. For convenience, we let C(t) be the primitive function of c(t).

To enhance the reliability of the D²EA, we consider potential variations of three factors which can exert great influence, namely, the contact rate, the recovery rate, and the quarantine rate. The variation patterns of these factors and their influences are defined in the following. Because new data come out every day, in this section we only use the data before February 8, 2020 in Hubei Province as an example to show how to estimate these rates. In the experimental part, we use the newest data and re-estimate these rates in the same way introduced in this section.

The contact rate refers to the number of contacts one person makes per unit time, i.e., r(t). While the nationwide passenger transport during Chunyun and its return journey can temporarily raise r(t), the city-blocking instructions from the authorities keep r(t) low. We use an idea from the kinetic theory of gases to estimate r(t). We regard a person p as a circle with diameter d and others as points; 0.5d is the longest distance the virus can travel in air, and for COVID-19, d = 3 m. If another person (as a point) enters the circle, we regard this person as having contacted p. Therefore we can estimate the contact rate r(t) of p as

r(t) = 24 u(t) n v d,   (32)

where u(t) is the ratio of active time (the time people stay outside) to total time, n is the average number of humans per unit area, and v is the mean relative velocity of two persons (here with v in km/h, so that r(t) is in contacts per day). For Hubei Province, we have n = 300 person/km². Reasonably, v = 1 km/h and u(0) = 0.25, which means a person goes outside for 6 h every day and moves 1 km every hour on average. Then, by Eq. (32), we have r(0) ≈ 5 person/d. Equation (32) gives the general idea of the estimation of the contact rate. However, influenced by the city-blocking policy and Chunyun, the change of r(t) has been complex in recent days, and we have to take all of this into account. In the following, we let r_1(t) be the contact rate influenced only by the city-blocking policy and r_2(t) be the rate influenced only by Chunyun.
First, we quantitatively estimate the variation of r_1(t) (Eq. (33)), where t_1 = 47 d is the index of the day the city of Wuhan was blocked and t_2 is the day when Wuhan is reopened. Initially, r_1(0) = r(0). Before t_1, u(t) slowly decreases, because the public becomes more and more wary of the potential epidemic. As u(t) is an average value, it should not decrease too fast, so we let the average active time fall by around 20 s every day. That is the first part of Eq. (33). After t_1, the contact rate decreases sharply, due to powerful authoritative management and control, especially the city-blocking policy. Considering that the other cities of Hubei Province were also quickly blocked after t_1, we use an exponential function to estimate the change of r_1(t) for Hubei Province, and it decreases exponentially to 0 after t_1. Although t_2 has not yet come, it is reasonable to deduce that r_1(t) will also return to normal exponentially after t_2. By analyzing the passenger-flow data of the last few years' Chunyun, we find that, on average, passenger flows reach their peaks 4 d before and 10 d after the Lunar New Year for Chunyun and its return journey, respectively. We use these results to estimate r_2(t) (Eq. (34)), where t_3 = 43 d and t_4 = 57 d are the indices of the days when passenger flows reach the peaks of Chunyun and its return journey, respectively. Initially, r_2(0) = r(0) as well, and we assume that the active time doubles at the peaks, i.e., u(t) = 0.5, and use two Gaussian functions to fit the peaks of Chunyun and its return journey. Finally, we obtain our final representation of the variation of the contact rate r(t) as the geometric mean of r_1(t) and r_2(t):

r(t) = sqrt(r_1(t) r_2(t)).   (35)

(A small numerical sketch of these contact-rate factors follows below.) In terms of the recovery rate, we mainly discuss the recovery rate of Q_I, i.e., γ(t). As time goes by, medical workers become more familiar with the COVID-19 in terms of treatment and care. Therefore γ(t) tends to grow with time and approach a certain saturation value. We know that γ(t)Q_I(t) is the number of cured cases per unit time at time t, so γ(t) can be estimated from the number of cured cases during the ⌊t⌋-th day, where ⌊·⌋ is the floor (rounding-down) operator. To let γ(t) be a continuous function of t and keep its form as simple as possible, we fit a logistic function to the number of cured cases every day and approximate the result with a piecewise linear function to get the estimate of γ(t) (Eq. (36)), where t_5 = 14 d is the length of the period during which there was yet no therapy for COVID-19, and t_6 = 80 d is the day when γ(t) reaches saturation according to our estimation. However, two events could greatly influence γ(t): one is the emergence of a vaccine, which can instantly increase γ(t); the other is a mutation of the virus, which can instantly decrease γ(t) and increase β_E, β_I, α, and α_I. We briefly analyze the concrete influence these events would cause and offer corresponding suggestions to the public and the government in Section 3.4. The quarantine rate δ(t) refers to the speed at which people among the infectious population I(t) get quarantined per unit time. It is associated with the bulk of medical resources, the number of infectious people to be quarantined, and the number of recovered people leaving the hospital. Please notice that δ^(-1)(t) is the average time from developing symptoms to getting quarantined, which we refer to as the quarantine buffer; we can use the everyday quarantine buffer of the cases to estimate δ(t).
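Since the exact closed forms of Eqs. (33)-(34) are not reproduced in this text, the following sketch only illustrates the construction just described: a baseline rate from Eq. (32), an exponential suppression after city blocking, two Gaussian Chunyun peaks, and their geometric mean. The decay constant and peak widths are our assumptions.

```python
# Sketch of the contact-rate construction described above. The exact closed
# forms of Eqs. (33)-(34) are not reproduced in the text, so the decay
# constant and Gaussian peak widths below are assumptions for illustration.
import numpy as np

n, v, d = 300.0, 1.0, 0.003          # person/km^2, km/h, km
t1, t3, t4 = 47.0, 43.0, 57.0        # blocking day and the two Chunyun peaks (d)

def r_base(u):
    return 24.0 * u * n * v * d      # Eq. (32): contacts per day at activity u

def r1(t):                           # city-blocking factor
    u = 0.25 - (20.0 / 86400.0) * min(t, t1)  # active time falls ~20 s per day
    base = r_base(u)
    return base if t < t1 else base * np.exp(-0.3 * (t - t1))  # assumed decay

def r2(t):                           # Chunyun factor: activity doubles at peaks
    peaks = sum(np.exp(-0.5 * ((t - tp) / 3.0) ** 2) for tp in (t3, t4))
    return r_base(0.25 + 0.25 * min(peaks, 1.0))

def r(t):                            # Eq. (35): geometric mean of both factors
    return float(np.sqrt(r1(t) * r2(t)))

print(f"r(0) = {r(0):.2f} person/d")  # ~5 person/d, matching the estimate above
```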
From official reports, the quarantine buffer was around 15 d in January and around 6 d on February 8. We take the quarantine buffer from the reports and use the Segmented Least Squares algorithm to obtain a piecewise linear estimate of δ(t). The concrete variation pattern of δ(t) is illustrated in Eq. (37), where t_7 = 54 d, t_8 = 80 d, and t_9 = 120 d are the piecewise points derived by the algorithm, and δ(t) = 0 when t < 0, meaning nobody was quarantined before t = 0. We can see that δ(t) increases faster after t_7. This is due to the sharp increase of infected people in the early stage of the epidemic, when medical resources were still sufficient. At t_8, however, medical resources become critical and δ(t) starts decreasing. Finally, at t_9, a balance is established among the limited medical resources, newly infected cases, and recovered cases. t_8 and t_9 have not yet come; they are predicted from the data generated by us. In fact, due to the special measures since February 11, δ(t) has increased sharply (from official data, the quarantine buffer δ^(-1)(t) ≈ 2 d), which does not match the result shown here. This does not matter, because we only give an example here to show the way we do the estimation. We use the latest real data collected from official reports in our experiments to ensure accuracy; moreover, we conduct a sensitivity analysis for δ(t) in Section 4 to show that our model is not sensitive to this factor.

In this section, we use data from Hubei Province to experiment with the D²EA model and analyze the results. Firstly, we set part of the parameters in the D²EA by estimating them from statistical data, and then we apply non-linear optimization to fit four key parameters that cannot be easily estimated. Our experimental data come from the National Health Commission of the People's Republic of China, the Health Commission of Hubei Province, and the CDC. We collected the numbers of confirmed cases, recoveries, and deaths before February 20, 2020 as our input. After that, we use the D²EA with the fitted parameters to make reasonable predictions, evaluate the quarantine measures, and discuss the influence of a possible specific medicine (e.g., a vaccine) and mutation of the virus as well. The D²EA is associated with a number of parameters whose values are to be assigned. While some of these values can be estimated from statistical data, others have to be obtained by fitting the model to real data. All the parameters associated with the model are listed in Table 1. In Table 1, the parameters with vacant values are to be assigned through model fitting; those with values in parentheses are variables whose values can change with time, the assigned values in the parentheses being the initial or final values; the other parameters are constants with values estimated statistically. Concretely, N(0) is the population of Hubei Province; γ_I is estimated from the data in the preliminary stage of the epidemic; γ_E is a small constant, since few cases recover during the incubation period; ε^(-1) is the average incubation period; η is estimated from the number of cases leaving Q_N(t) every day; τ is the duration of quarantine; T is the time needed to confirm a suspected case. r(t), γ(t), and δ(t) are estimated by the methods in Section 2.4 using the newest data, especially considering the special measures since February 11, 2020. More concretely, the variations of these factors are shown in Fig. 4.
We see that the quarantine rate increases sharply on February 11, 2020 (t = 65 d), the day on which clinically diagnosed cases started to be counted as confirmed cases. In the D²EA model, as introduced in Table 1, there are four parameters to fit: α, the death rate of quarantined confirmed cases Q_I; α_I, the death rate of the free infectious population I; β_E, the probability of transmission per contact with the free exposed population E; and β_I, the probability of transmission per contact with the free infectious population I. We use non-linear optimization libraries in MATLAB to fit the D²EA, represented by the delay differential equation set, to real data collected from official reports on the ongoing epidemic. Concretely, we restrict the model to Hubei Province to reduce the influence of spatial heterogeneity, and we collect the daily updated numbers of accumulated confirmed infected cases, recovered cases, and deaths in Hubei up to February 20, 2020 as our first-hand data. These data come from the National Health Commission of the People's Republic of China and the CDC. The detailed fitting results are shown in Table 2. Since the D²EA model considers five different quarantine states, it can better describe real situations under quarantine policies and make full use of the data. The confidence intervals (CIs) listed in Table 2 are narrow, indicating a successful fit. At this point, we have obtained all the parameter values the model needs. Next, we use the model to evaluate the current situation of the epidemic and to forecast its future trends.

The D²EA's forecast curves for the quarantined infectious population Q_I, the recovered population R, and the deaths D are shown in Fig. 5. The dark red dots along the curves are the up-to-date ground-truth data collected from the official sources mentioned above. From Fig. 5, we see that our Q_I, R, and D curves fit the reported data well. The origin of the x-axis (time axis) of the prediction curves in Fig. 5 represents December 8, 2019, the day when the first infected cases were sent to hospital. At the time of writing (February 22, 2020), we are on the 76th day after that, and the model's prediction agrees well with the currently reported number of accumulated confirmed cases in Hubei Province, i.e., 63,454 cases; the corresponding figures for China as a whole and for the world excluding China are 76,392 and 1,402, respectively. According to the further prediction, the accumulated infected cases in Hubei will peak at the end of February (around February 29, 2020), rising to around 65,000, and then level off.

It is clear that the quarantine policies adopted by the Chinese government and health authorities have made a great difference in preventing the epidemic from spreading widely. A natural question is: how effective have they been? To our knowledge, no literature has discussed this topic quantitatively. With the D²EA, however, it is easy to do so. First, the D²EA gives epidemic prediction curves under the actual current circumstances, because we explicitly take the quarantine measures into consideration. Conversely, we can neglect all quarantine states; in fact, without the quarantine states, the D²EA degenerates to the classic SEIR model.
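The fitting stage can be illustrated with a deliberately simplified stand-in: an ordinary (delay-free) SEIQR system whose four free parameters play the roles of β_E, β_I, α_I, and α, fitted by least squares to a cumulative-case series. All rates, initial conditions, and the synthetic data below are assumptions for demonstration; the actual D²EA is a delay differential system fitted in MATLAB.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

N = 5.9e7                                # approximate population of Hubei Province
eps, gamma, delta = 1 / 5.2, 0.05, 0.3   # assumed incubation, recovery, quarantine rates

def seiqr(y, t, beta_E, beta_I, alpha_I, alpha):
    S, E, I, Q, R, D, C = y
    lam = (beta_E * E + beta_I * I) / N          # force of infection
    return (-lam * S,                            # susceptible
            lam * S - eps * E,                   # exposed
            eps * E - (delta + alpha_I) * I,     # free infectious
            delta * I - (gamma + alpha) * Q,     # quarantined confirmed
            gamma * Q,                           # recovered
            alpha * Q + alpha_I * I,             # deaths
            delta * I)                           # cumulative confirmed cases

def cumulative_cases(params, t):
    y0 = (N - 1000, 800, 200, 0, 0, 0, 0)        # assumed initial state
    return odeint(seiqr, y0, t, args=tuple(params))[:, 6]

t = np.arange(0, 75)
true = (0.3, 0.6, 0.002, 0.003)                  # "unknown" parameters to recover
data = cumulative_cases(true, t) * np.random.default_rng(0).normal(1, 0.02, t.size)

fit = least_squares(lambda p: cumulative_cases(p, t) - data,
                    x0=(0.2, 0.4, 0.001, 0.001),
                    bounds=([0] * 4, [2, 2, 0.1, 0.1]))
print("fitted (beta_E, beta_I, alpha_I, alpha):", fit.x)
```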
To evaluate how effective the quarantine measures are, we compare the S, E, I, and R curves of the D²EA with those of the degenerated SEIR model. The results are shown in Fig. 6. In each panel of Fig. 6, the left vertical axis refers to the susceptible population S, and the right vertical axis to the exposed population E, infectious population I, and recovered population R. Note that the right vertical axis of Fig. 6(a) is scaled by 10^4 and that of Fig. 6(b) by 10^5. This indicates that, with the quarantine measures in place, the exposed and infectious populations are only about one tenth of what they would otherwise have been. In other words, the quarantine measures taken by the Chinese government and health authorities have suppressed the epidemic to about one tenth of the scale it would have reached without any control measures. The quarantine measures are indeed very effective.

In Section 2.4, we introduced the variation patterns of three practical factors: the contact rate r(t), the recovery rate γ(t), and the quarantine rate δ(t). Section 3.1 depicts the concrete trend estimates based on the latest data in Fig. 4, and we make the following observations:

(1) While the Chunyun return journey will inevitably bring more opportunities for contact, as long as Hubei Province, and especially the city of Wuhan, is not reopened too early, the influence of Chunyun can be suppressed. In Section 4.1, we quantitatively discuss the most appropriate time for Hubei Province to reopen. In any case, life in Hubei Province should be able to return to normal before May.

(2) Currently, we can expect a higher recovery rate for COVID-19, as shown in Fig. 4(b); it will eventually approach 0.07/d if no external events occur. This trend could, however, be greatly influenced by two events: the emergence of a specific medicine (e.g., a vaccine) and mutation of the virus. We simulate these two situations and show the results in Fig. 7. For the first event, the emergence of a specific medicine can enhance the recovery rate γ(t). We assume the medicine appears at t = 78 d and let γ(t) increase quickly after that. As the simulation in Fig. 7(a) shows, a specific medicine can accelerate the end of the epidemic and transfer more cases from state Q_I to R. For the second event, if a mutation takes place, the transmission probabilities (β_E and β_I) and death rates (α_I and α) will increase. We assume the mutation takes place at t = 78 d and let the transmission probabilities and death rates rise after that. An increase in the number of deaths is easily foreseen (Fig. 7(b)); however, the scale of infection can still be contained as long as the quarantine measures are properly executed (Fig. 7(c)). Consequently, the wise option for the public is to stick to the quarantine policies even when the situation has only just begun to improve. Meanwhile, the government should direct more funds to vaccine development in order to end the epidemic as early as possible and reduce the chance that a mutation occurs.

(3) Owing to limited medical resources, it is impossible for every confirmed case to be quarantined instantly. The smaller the quarantine rate, the longer the epidemic will last. According to our estimation in Fig. 4(c), δ(t) approaching 0.5/d reflects a two-day delay before an infection can actually be quarantined.
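The with/without-quarantine comparison behind Fig. 6 can be reproduced in miniature by running one compartmental model twice: once with a quarantine outflow, and once with that outflow set to zero, which collapses the model to a classic SEIR. The parameters below are illustrative, not the fitted D²EA values.

```python
import numpy as np
from scipy.integrate import odeint

N, eps, gamma = 5.9e7, 1 / 5.2, 0.05   # assumed population and rates

def model(y, t, beta, delta):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - eps * E
    dI = eps * E - (gamma + delta) * I   # delta removes cases into quarantine
    dR = (gamma + delta) * I
    return dS, dE, dI, dR

t = np.linspace(0, 200, 2001)
y0 = (N - 1000, 800, 200, 0)
with_q = odeint(model, y0, t, args=(0.4, 0.3))   # quarantine active
no_q = odeint(model, y0, t, args=(0.4, 0.0))     # degenerate SEIR
print("peak infectious with quarantine:   %.3g" % with_q[:, 2].max())
print("peak infectious without quarantine: %.3g" % no_q[:, 2].max())
```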
Considering the severe shortage of medical resources in Hubei Province, and especially in Wuhan, the overall situation is not entirely optimistic, even though we have forecast that the number of confirmed cases will level off soon.

The time to reopen Hubei Province, t_2, is very important to the whole model. We assume Wuhan is the last city to be reopened, so t_2 is the time at which Wuhan reopens. If the authorities lift the lockdown on Wuhan too early (i.e., t_2 is too small), the epidemic is likely to break out again; if too late, the economic loss will be huge. The government should therefore choose t_2 carefully to minimize the loss, and t_2 greatly influences the results of our model. To find the best value of t_2, we vary it and run several simulations. The results are shown in Fig. 8(a). We can see that the best time to reopen the province is at least 100 d after the outbreak; that is, the authorities should reopen Hubei Province after March 16, 2020. For prudential reasons, we recommend that Hubei, and especially the city of Wuhan, be reopened no earlier than late March; otherwise the epidemic has a high chance of breaking out again.

The quarantine rate δ(t) is associated with the population sizes of Q_S, Q_E, Q_N, Q_P, and Q_I. A high quarantine rate means the government can quickly quarantine infected cases, so more infections are avoided. However, the quarantine rate cannot grow arbitrarily high, because medical resources are limited. We believe δ(t) will finally reach an equilibrium point δ(+∞), which is determined by the speed at which resources are sent to the epidemic area, the resources released by people finishing quarantine, and the demands of newly confirmed cases. Owing to the special measures of February 11, 2020, however, the final value δ(+∞) is difficult to estimate accurately. In our earlier experiment, we assume δ(t) reaches δ(+∞) after February 12, 2020 and set δ(+∞) = 0.5/d, a value derived from the 2 d quarantine buffer but likely to be inaccurate. To analyze the sensitivity of the model to different values of δ(+∞), we vary δ(+∞) and run corresponding simulations. The results are shown in Fig. 8(b), where δ(+∞) = 5/d means the quarantine buffer is 0.2 d. The model is not sensitive to δ(+∞), as the accumulated number of Q_I does not change significantly.

In this article, we have introduced our model, the D²EA (Delay Differential Epidemic Analyser), to depict the epidemic picture of the ongoing COVID-19 pneumonia. The D²EA is revised from the SEIR model, with careful consideration of quarantine measures and the variation of practical factors. Through comparison and analysis, we see that the D²EA depicts a comprehensive picture of the ongoing epidemic, so its forecasts are reliable. In the experimental part, we fit the D²EA to data collected from official reports of the epidemic in Hubei Province, the source of the epidemic, using non-linear optimization methods, and we forecast the epidemic's course. According to the D²EA's forecast, accumulated confirmed infected cases in Hubei will peak at the end of February and then level off. We also quantify that the quarantine measures currently adopted have kept the epidemic to one tenth of the scale it would otherwise have reached, and we recommend that Hubei Province, and especially the city of Wuhan, lift the lockdown no earlier than late March.
[…] landscape scale and in urban areas.

Author summary

The spread of infectious diseases is the outcome of contact patterns and involves source-sink dynamics of how infectious individuals spread the disease through pools of susceptible individuals. Control strategies that aim to reduce disease spread often need to accept ongoing transmission chains and therefore may not work equally well in different scenarios of how individuals and populations are connected to each other. To understand the efficacy of different control strategies for containing the spread of COVID-19 across gradients of urban and rural populations, we simulated a large range of control strategies in response to regional COVID-19 outbreaks, involving regional lockdowns and the isolation of individuals that express symptoms as well as of those that have developed no symptoms but may contribute to disease transmission. Our results suggest that isolation of asymptomatic individuals through intensive test-and-trace is important for efficiently reducing the epidemic size. Regional lockdowns and the isolation of symptomatic cases only are of limited efficacy for reducing the epidemic size, unless the overall transmission rate is kept persistently low. Moreover, we found that high overall transmission rates result in relatively larger epidemics in urban than in rural communities under these control strategies, emphasising the importance of keeping transmission rates constantly low, in addition to regional measures, to avoid disease spread at large scale.

In the absence of a vaccine against COVID-19 during the initial pandemic phase, stakeholders are confronted with challenging decision-making to balance constraints of social […] pandemic spread; a central aim is to reduce case incidence in order to release the pressure on health systems. A more fundamental, long-term goal should be to reduce the overall epidemic size and allow particularly those most prone to suffer from the disease to escape infection until a pharmaceutical measure such as a vaccine is in place. Control strategies are likely to be regional and temporary, aiming to reduce the time-dependent reproduction number R while accepting that ongoing transmission is long-term. But how should these regional and temporary strategies account for disease spread in ever-[…]?

The spread of infectious disease is rarely random. It is instead likely driven by the complex and heterogeneous social interaction patterns of humans and the stark gradient between urban and rural populations [3-5].
Heterogeneity in the contact patterns of individuals and among social groups is also assumed to impact the depletion of the pool of susceptible individuals and the build-up of possible herd immunity that prevents further spread [6,7]. Hence, future short- and long-term mitigation strategies that focus on managing regional and erratic outbreaks would benefit from a better understanding of which control strategies provide the best possible outcome under variable regional conditions.

Our modelling approach is strategic, in contrast to many tactical COVID-19 simulation models that have focused on replicating specific characteristics of real outbreaks with the aim of predicting the epidemic in specific locations [1,9,10]. Rather than modelling a certain scenario, we aim to define wide parameter ranges and explore the model behaviour across a large array […].

In order to provide an empirical basis for exploring possible COVID-19 spread across an urban-rural gradient and the efficacy of different disease control measures, we selected four counties in southwestern Wales (Pembrokeshire, Carmarthenshire, Swansea, Neath Port Talbot), with a total human population of 701,995 (hereafter termed the 'metapopulation') dispersed over an area of 4,811 km², as a case study. This area was selected because of its strong urban-rural gradient, from city centres to sparsely occupied farming localities, and its readily available demographic data.

We used a gravity model to define the connections between populations, as it is capable of reflecting the connectivity underpinning landscape-scale epidemics [11,12]. In particular, a […] if the attractant population is closely surrounded by others (Fig S1). A scaling factor bounded between 0 and 1 is a sampled parameter that may vary across scenarios, accounting for the uncertainty in population connectivity. For each population i, we computed a regional gravity index (with […]).

[…] where β is the disease transmission parameter, and k is a scaling factor for the infectiousness of asymptomatic relative to symptomatic infectious individuals, bounded between 0 and 1. […] encounters between citizens and higher contact frequencies between individuals of the same community in urban areas [14].
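A minimal sketch of the gravity-model connectivity described above: pairwise attraction proportional to the two population sizes and decaying with distance, plus a per-population regional gravity index. The power-law distance kernel, its exponent, and the synthetic populations are assumptions; the paper's exact formulation (including its sampled scaling factor) is not reproduced in the extracted text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
pop = rng.integers(500, 20000, n).astype(float)   # synthetic population sizes
xy = rng.uniform(0, 100, (n, 2))                  # synthetic centroids (km)

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)                       # no self-connection

rho = 2.0                                         # assumed distance-weighting exponent
gravity = pop[None, :] * pop[:, None] / d ** rho  # pairwise connectivity matrix
gravity_index = (pop[None, :] / d ** rho).sum(axis=1)  # regional gravity index per population

print("most connected population index:", int(np.argmax(gravity_index)))
```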
iii) Regional temporary reduction of transmission rates ('regional lockdown') in response to a regional outbreak within the modelled LSOA administrative units, with four parameters varied for decision making and control: (1) a threshold defining the proportion of the […].

To be able to assess the efficacy of these control strategies against a reference, we […]; see Table S1 for the ranges of parameter values used.

We computed the strength of correlation (Spearman rank correlation) between the regional relative epidemic size and the respective regional gravity index (the 'urban-rural gradient in relative epidemic size') in order to explore whether control strategies varied in their efficacy across urban-rural gradients. A strong positive correlation can be interpreted as a strong urban-rural gradient in disease spread, with smaller relative epidemic sizes in rural areas, where connectivity is generally lower. We also computed the strength of correlation between the epidemic sizes of baseline scenarios (uncontrolled outbreaks) and the respective regional gravity index.

We report results in terms of the direction of effects (i.e. decrease/increase in relative epidemic size, reflecting higher/lower control efficacy) and relative influence (i.e. % of variance explained by the various parameters in the corresponding BRT model) for those parameters that show 'significant' effects in both GLM and BRT analyses (i.e. GLM coefficients clearly distinct from zero, relative parameter influence > 5%).

The urban-rural gradient in epidemic sizes (expressed as the rank correlation coefficient between the regional epidemic size and the regional gravity index) considerably decreased among […] (Fig 2).

Regional lockdown scenarios appeared to be of limited efficacy in our simulations (Fig 1) and depend largely on small transmission parameters (β, 70% relative influence) (Fig 2). Their efficacy was sensitive to the regional threshold level for lockdown implementation (10% relative influence) and to lockdown stringency (6% relative influence). A reduction of relative epidemic sizes to 5% of those of the respective baseline scenarios through regional lockdowns was only achieved for regional lockdown threshold levels of ≤ 1% of the population being symptomatic.

The strength of the urban-rural gradient in relative epidemic sizes resulting from the isolation of […]
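The 'urban-rural gradient in relative epidemic size' is simply a Spearman rank correlation between two per-region quantities, as in the sketch below (with synthetic placeholder data standing in for simulation output):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
gravity_index = rng.lognormal(0, 1, 200)          # proxy for how urban each region is
rel_epidemic = np.clip(0.2 + 0.1 * np.log(gravity_index)
                       + rng.normal(0, 0.05, 200), 0.0, 1.0)

rho, p = spearmanr(gravity_index, rel_epidemic)
print(f"urban-rural gradient (Spearman rho) = {rho:.2f}, p = {p:.3g}")
```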
[…] contained in urban environments (i.e. resulting in less strong urban-rural gradients in relative epidemic size), despite a concentration of cases there, as depicted by the mostly positive correlation coefficients of the urban-rural gradient in relative epidemic size (Fig 4).

In response to regional lockdown strategies, the strength of the urban-rural gradient in relative epidemic size increased with increasing transmission parameters (β, 34% relative influence), increasing travel frequencies (27% relative influence), and stronger distance weighting in the underlying gravity model (18% relative influence; Fig 3).

[…] unless the overall transmission rate is kept persistently low. Isolation of non-symptomatic infected individuals, who may be detected by effective test-and-trace approaches, is pivotal for reducing the overall epidemic size over a wider range of transmission scenarios. By considering an 'urban-rural epidemic gradient' as the strength of correlation between regional epidemic […].

In practice, the locally restricted lockdown implemented in the city of Leicester in the UK, which began in June 2020, is just one prominent example of mounting evidence that regional lockdowns are not necessarily followed by a reduction in disease transmission in the subsequent weeks [25], which would ideally prevent spread of the virus beyond the local context. This slow decline in incidence following regional lockdowns is in line with […].

Surprisingly, we found travel frequency and possible density dependence in contact frequency to have a rather small relative impact on overall epidemic size compared to the transmission parameter (Fig 2). Despite the recognised importance of connectivity, travel […].

We found the magnitude of the transmission rate to also determine the success of different control strategies in urban versus rural areas, leading to varying urban-rural epidemic gradients in response to varying transmission rates and different control strategies (Fig 3). For interventions focused on isolating both non-symptomatic and symptomatic individuals, and for regional lockdowns, our results reveal the strongest urban-rural epidemic gradients at […] under these conditions. These results suggest that at high transmission rates, the urban-rural epidemic gradient is reinforced by overall poorly curbed disease spread at the metapopulation level (see Fig 4).
Conversely, we found the urban-rural gradient in epidemic sizes to be mostly masked at high transmission rates for measures targeted at symptomatic individuals only, suggesting that these measures (which are generally of moderate to low efficacy) would not contain disease spread at the metapopulation level unless transmission rates are kept constantly low (see Fig 4). Exploring such effects warrants further investigation based on […].

Acknowledgments

We acknowledge the support of funding from the Welsh Government for this project, and also the Supercomputing Wales project, which is part-funded by the European Regional Development Fund […].
Porcine epidemic diarrhea virus (PEDV), an enveloped, single-stranded, positive-sense RNA virus, is the causative agent of porcine epidemic diarrhea (PED) [1,2]. PEDV belongs to the genus Alphacoronavirus within the subfamily Coronavirinae of the family Coronaviridae [3]. PEDV encodes several structural proteins, including the spike (S), envelope (E), membrane (M), and nucleoprotein (N) [4-6]. PEDV infection causes fatality rates of 80 to 100% in suckling piglets [7]. It was originally reported in Belgium and the United Kingdom [8]. PEDV outbreaks emerged in the United States in 2013 [9-11], and in 2014 PEDV swept through three farms in southwestern Germany [12]. More importantly, PEDV has been proposed to be a potential threat to other species, especially humans [13]. Currently, no antiviral drugs are available to control PEDV infection.

Glycyrrhizin (GLY), the major component of licorice root (Glycyrrhiza Radix) extracts, is the most intensively investigated bioactive compound of licorice [14]. GLY is a glycosylated saponin containing one molecule of glycyrrhetinic acid and two molecules of glucuronic acid [15,16]. It has been used as a traditional Chinese medicinal herb for treating hepatitis because of its anti-inflammatory properties [17]. GLY possesses several beneficial activities, including expectorant, anti-ulcer, anti-allergy, anti-coagulative, anti-oxidative, antiviral, anti-tumor, and anti-inflammatory activities [18-21]. The mechanism by which GLY exerts these diverse effects remains largely unclear. An antiviral effect of GLY has been reported previously for various viruses, such as human cytomegalovirus [22], influenza virus [23], the severe acute respiratory syndrome coronavirus [24], herpes simplex virus type 1 [25], and the hepatitis A [16] and B [26] viruses. However, an antiviral effect of GLY against PEDV has not yet been reported.

GLY is a competitive inhibitor of high mobility group box 1 (HMGB1) that can inhibit the cytokine activity of HMGB1 [27]. Extracellular HMGB1 functions as a damage-associated molecular pattern (DAMP) molecule and activates proinflammatory signaling pathways through pattern recognition receptors, including Toll-like receptors 2 and 4 [28,29] and the receptor for advanced glycation end products (RAGE) [30,31]. HMGB1 is a unique mediator of innate immune responses and inflammation-associated events [32-35]. In addition, extracellular HMGB1 contributes to the pathogenesis of various chronic inflammatory and autoimmune diseases [36-40].

In our studies, we first explored the antiviral effect of GLY against PEDV infection. Next, we revealed that GLY inhibited entry and replication of PEDV. In addition, we demonstrated that GLY also decreased the mRNA levels of proinflammatory cytokines. We further confirmed that TLR4 and RAGE (receptors for HMGB1) might be associated with the pathogenesis of PEDV infection.

The African green monkey kidney cell line Vero was cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM, Invitrogen) supplemented with 10% newborn calf serum (16010-159, Gibco). PEDV strain HLJBY, originally isolated from the feces of piglets suffering from severe diarrhea [41], was propagated in Vero cells in DMEM supplemented with 2% newborn calf serum and adapted over eight passages. PEDV was inactivated by UV light exposure [42].
The loss of infectivity of the UV-inactivated virus was confirmed by plaque formation assay. Glycyrrhizin was purchased from Sigma. Small interfering RNAs (siRNAs) were purchased from Biotend (China). The sequence of the siRNA specifically targeting RAGE was 5′-GCCGGAAAUUAUAGAUUCUdTdT-3′, and that of the negative control siRNA was 5′-UUCUCCGAACGUGUCACGUTT-3′. Antibodies against RAGE were obtained from Cell Signaling Technology. The polyclonal antibody against the PEDV N protein was previously generated in our lab.

To construct an HA-tagged HMGB1-expressing plasmid, HMGB1 was first amplified by PCR with specific primers (Table 1) carrying EcoRI and XhoI restriction sites in the forward and reverse primers, respectively. The PCR product was digested with EcoRI and XhoI and ligated for 16 h at 4 °C into pCAGGS-HA (PCA) that had been digested with the same enzymes. The HMGB1 mutants HMGB1-C45S (C45S), HMGB1-C106S (C106S), and HMGB1-C45S/C106S (C45S/C106S) were prepared with the MutExpress® II Fast Mutagenesis kit (Vazyme, China) using specific primers (Table 1), according to the manufacturer's instructions, with pCAGGS-HMGB1 as the template.

Vero cells were grown to 50-60% confluency in 6-well cell culture plates and then transiently transfected with siRNA targeting RAGE (siRAGE) using Lipofectamine 2000. The silencing efficiency of the siRNA was analysed by western blotting and qRT-PCR. A non-targeting control siRNA (NC) was used as the negative control.

To assess the antiviral effect of GLY against PEDV infection, Vero cells were treated with different concentrations of GLY (diluted in DMEM supplemented with 2% newborn calf serum) for 2 h. The cells were then washed three times with PBS before being infected with PEDV (multiplicity of infection (MOI) = 0.1, 1, or 10) for 24 h in the presence of different concentrations of GLY. The supernatant was collected for plaque formation assays, and the cells were collected for western blot or qRT-PCR analysis.

Vero cells were first grown in a 6-well plate to 70-80% confluency and then incubated with GLY (0.1-0.8 mM) at 37 °C for 2 h. Next, the cells were incubated with UV-inactivated PEDV (MOI = 1) at 4 °C for 1 h, followed by a one-hour incubation at 37 °C. The cells were then washed three times with citric acid solution (40 mM citric acid, 10 mM KCl, 135 mM NaCl, pH 3.0) to remove un-internalized virus particles, and subsequently washed three times with PBS. Total protein was then prepared from these Vero cells for western blot analysis.

To assess the inhibitory effect of GLY on PEDV replication, Vero cells were seeded in a 6-well plate and infected with PEDV (MOI = 0.1) for 1 h at 37 °C. The cells were washed three times with PBS before being incubated with fresh DMEM supplemented with 2% newborn calf serum in the presence of GLY (0.4 mM). The cells were collected at 4, 8, and 12 h post-infection (hpi) for western blot or qRT-PCR analysis.

To explore the effect of GLY on virus assembly, Vero cells were infected with PEDV in the presence of GLY at 37 °C for 24 h. The supernatant and cells were collected separately for qRT-PCR analysis, and the ratio of ORF3 RNA levels in the supernatant to those in the cells was used as an index of virus assembly. To explore the effect of GLY on virus release, virus titers in the supernatant and in the cells were determined by plaque formation assay, and the ratio of the titer in the supernatant to that in the cells was used as an index of virus release [42]. Vero cells were seeded in a 24-well plate.
The cells were pre-treated with different concentrations of GLY for 2 h before PEDV infection (MOI = 0.1, 1 h) in the presence of the same concentrations of GLY, and were then further incubated for 5, 11, or 23 h in the presence of GLY before harvest. The mRNA levels of proinflammatory cytokines were determined by qRT-PCR.

Vero cells were washed three times with 1 ml of ice-cold PBS and lysed in 100 µl of 2× SDS loading buffer. Proteins were subjected to SDS-PAGE and transferred to a polyvinylidene fluoride (PVDF) membrane. The membrane was blocked with 3% BSA in PBST (4.3 mM Na2HPO4, 1.4 mM KH2PO4, 137 mM NaCl, 2.7 mM KCl, 0.05% Tween-20, pH 7.4) for 1 h at room temperature, followed by incubation for 2 h with the appropriate primary antibody (anti-PEDV-N, anti-actin, or anti-RAGE). After extensive washing with PBST, the membranes were incubated for 1 h with the appropriate secondary antibody (HRP-anti-rabbit IgG or HRP-anti-mouse IgG). Immunoreactive bands were detected with an ECL enhanced chemiluminescence system (Biouniquer, China) and analyzed using ImageJ software.

Total RNA was extracted and purified from cells using TRIzol reagent (Invitrogen). Reverse transcription and qRT-PCR were performed as previously described [43]. The primer sequences for qRT-PCR or cloning (IL-1β, IL-6, IL-8, TNF-α, GAPDH, and PEDV ORF3) are listed in Table 1. GAPDH was used as the internal control, and the relative expression levels of the detected genes were compared with that of GAPDH by the 2^-ΔΔCt method. Each qRT-PCR assay was performed in triplicate.

Virus culture supernatant was serially 10-fold diluted (from 10^2 to 10^5) and added to 6-well plates containing confluent cell monolayers for 1 h before overlay medium (2.5% low-melting-point agarose in DMEM containing 4% newborn calf serum) was added to each well. The cells were incubated at 37 °C with 5% CO₂ for 3 days before being stained with 0.5% crystal violet.

Approximately 2 × 10^4 Vero cells per well were added to a 96-well cell culture plate and cultured for 24 h at 37 °C in the presence of 5% CO₂. The medium was then replaced with fresh DMEM (supplemented with 2% newborn calf serum) containing GLY, and the plates were incubated for up to 24 h. Cytotoxicity was assayed by measuring lactate dehydrogenase (LDH) released from the cells using the CytoTox-ONE homogeneous membrane integrity kit (Promega, USA), according to the manufacturer's instructions.

All data were determined in triplicate and are representative of at least two separate experiments. The results represent the means ± standard deviations of each triplicate data set. Differences between means were considered significant at * p < 0.05 and very significant at ** p < 0.01. All analyses were performed by one-way ANOVA using the SPSS software package (version 16.0, SPSS Inc., Chicago, IL, USA).

The antiviral activity of glycyrrhizin (GLY) has been reported previously for several viruses. To assess the antiviral effect of GLY against PEDV infection, Vero cells were treated with different concentrations of GLY for 2 h before PEDV infection (MOI = 0.1). The infected cells were further incubated for 24 h in the presence of GLY before western blot analysis, which showed that PEDV-N protein expression was moderately reduced in a dose-dependent manner (Fig. 1A), demonstrating the antiviral activity of GLY against PEDV infection in Vero cells.
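The relative-expression figures reported in what follows derive from the 2^-ΔΔCt calculation named above; a minimal sketch with hypothetical Ct values (GAPDH as the internal control, mock-infected cells as the reference condition):

```python
# 2^-ΔΔCt relative expression; all Ct values below are hypothetical examples.

def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    dct_sample = ct_target - ct_ref              # normalize target to GAPDH in the sample
    dct_control = ct_target_ctrl - ct_ref_ctrl   # same normalization in the reference
    return 2 ** -(dct_sample - dct_control)      # fold change relative to the reference

# Example: IL-6 in PEDV-infected vs. mock-infected cells (made-up Ct values)
fold = ddct_fold_change(ct_target=24.1, ct_ref=16.3,
                        ct_target_ctrl=27.9, ct_ref_ctrl=16.1)
print(f"IL-6 fold change: {fold:.1f}")           # prints 16.0 for these values
```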
The antiviral activity of GLY was further confirmed by qRT-PCR, which showed that GLY treatment resulted in approximately a 70% reduction of viral ORF3 gene expression at a concentration of 0.8 mM (Fig. 1B). A similar dose-dependent inhibition of virus infection was observed in the plaque formation assay (Fig. 1C). A cytotoxicity experiment showed that GLY did not cause significant cytotoxic effects in Vero cells (at concentrations up to 0.8 mM for 24 h) (Fig. 1D). In summary, our data established the antiviral activity of GLY against PEDV infection in Vero cells. In addition, the antiviral activity of GLY was investigated at two further MOIs (1 and 10), which indicated that GLY had a stronger inhibitory effect on PEDV infection when cells were infected at a lower MOI (Fig. 1E).

The effect of GLY on PEDV entry, replication, assembly, and release

To investigate the effect of GLY on PEDV entry, Vero cells pre-treated with GLY (37 °C, 2 h) were infected with UV-inactivated PEDV (MOI = 1) for 1 h at 4 °C, followed by a 1 h incubation at 37 °C. Un-internalized PEDV was washed away with citric acid solution, and the cells were immediately subjected to western blot analysis of the PEDV-N protein. The result revealed that PEDV-N protein levels decreased in a dose-dependent manner (Fig. 2A), suggesting that virus entry is affected by GLY.

To determine whether GLY has an inhibitory effect on PEDV replication, Vero cells that had already been infected with PEDV (MOI = 0.1, 1 h, 37 °C) were incubated at 37 °C in the presence of GLY (0.4 mM). PEDV-N protein levels in the infected cells were analyzed at 4, 8, and 12 hpi by western blot (Fig. 2B), which showed that PEDV-N expression was inhibited by GLY. In addition, viral ORF3 RNA levels were decreased after GLY treatment, as demonstrated by qRT-PCR analysis, at 4 hpi (~25%), 8 hpi (~33.3%), and 12 hpi (~54.5%) (Fig. 2C). These results suggest that GLY inhibits the replication of PEDV.

To study the effect of GLY on virus assembly, we analyzed the RNA levels of the PEDV ORF3 gene in the supernatant and in the cells. Vero cells that had been infected with PEDV (1 h) were incubated at 37 °C for 24 h with different concentrations of GLY, and the supernatant and cells were collected for qRT-PCR analysis. The ratio of ORF3 RNA levels in the supernatant to those in the cells was similar between GLY-treated and mock-treated samples (data not shown), indicating that GLY might not affect PEDV assembly. To determine whether GLY affects virus release, virus titers in the supernatant and in the cells were determined by plaque formation assay. Vero cells that had already been infected were incubated with different concentrations of GLY, and the cells were freeze-thawed three times after PBS washing. The plaque formation assay revealed that the ratio of virus titers between supernatant and cells was similar between GLY-treated and mock-treated cells (data not shown), suggesting that GLY might not affect virus release either.

Low amounts of proinflammatory cytokines may be protective against viral invasion; however, excessive cytokine production can sabotage host immune responses [44]. It has been reported that host cells initiate immune responses by producing various proinflammatory cytokines during infection with various viruses, including West Nile virus [45], SARS-CoV [46-48], and the hepatitis A, B, and C viruses [49].
Therefore, we studied whether GLY treatment affected the levels of proinflammatory cytokine mRNAs during PEDV infection. Our data showed that PEDV infection increased the mRNA levels of the proinflammatory cytokines IL-1β, IL-6, IL-8, and TNF-α, while GLY treatment decreased the mRNA levels of these cytokines at 6, 12, and 24 hpi: IL-1β (20%, 49%, 75%), IL-6 (39%, 53%, 80%), IL-8 (46%, 47%, 94%), and TNF-α (51%, 56%, 91%) (Fig. 3A, B, and C). Since we had previously found that GLY treatment affected PEDV entry, we performed another experiment to rule out the possibility that the effect of GLY on proinflammatory cytokines was simply caused by a decrease in the effective MOI. The cells were first infected with PEDV for 1 h to enable virus entry, and unbound virus was then removed with citric acid solution. The infected cells were then incubated with GLY before the first round of virus release into the extracellular environment (6 h). qRT-PCR analysis of proinflammatory cytokine mRNA levels at 4 hpi revealed that GLY indeed decreased these levels: IL-1β (12%), IL-6 (34%), IL-8 (33%), and TNF-α (41%) (Fig. 3D). All these results suggest that GLY treatment attenuated the proinflammatory responses of the cells during virus infection.

GLY is a competitive inhibitor of high mobility group box 1 (HMGB1). Many studies have found that HMGB1 induces proinflammatory cytokine expression through the TLR4 signaling pathway. For HMGB1 to exert its effect through TLR4, a disulfide bond must form between Cys23 and Cys45 [50], and a reduced Cys106 in HMGB1 is required [51]. We therefore constructed three HMGB1 mutants to investigate their effects on virus infection. Vero cells were transfected for 12 h with the HMGB1 mutant plasmids HMGB1-C45S, HMGB1-C106S, or HMGB1-C45S/C106S, or with the control plasmid pCAGGS-HA (PCA), before PEDV infection (MOI = 0.1). The cells were collected at 24 hpi for western blot analysis. The expression of HMGB1-C45S, C106S, and C45S/C106S was confirmed (Fig. 4A). PEDV-N protein expression levels decreased by about 29%, 20%, and 48% in HMGB1-C45S-, C106S-, and C45S/C106S-overexpressing cells, respectively (Fig. 4A). The RNA levels of the viral ORF3 gene were likewise decreased, by approximately 51%, 20%, and 65%, respectively (Fig. 4B). In addition, the effect of the double mutant on the mRNA levels of IL-1β, IL-8, and TNF-α was more pronounced than that of the single mutants (Fig. 4C).

RAGE is one of the main receptors of HMGB1 [52]. We knocked down RAGE expression by siRNA to determine the influence of RAGE on PEDV infection. As expected, a decline in PEDV-N expression and in PEDV ORF3 RNA levels (62%) was observed after RAGE knockdown (85% knockdown efficiency) (Fig. 5A, B, C). We also determined the effect of RAGE knockdown on infection using the plaque formation assay, which showed that the virus titer in the supernatant was decreased (Fig. 5D). Furthermore, siRAGE treatment significantly reduced the mRNA levels of IL-1β (23%), IL-6 (22%), IL-8 (25%), and TNF-α (52%) compared to NC-treated cells (Fig. 5E). Based on these experiments, we conclude that HMGB1 might exert its biological function through TLR4 and RAGE during PEDV infection in Vero cells.

Glycyrrhizin (GLY), the main component of licorice root extracts, inhibits the infection of many viruses. In our studies, we revealed that GLY could moderately inhibit PEDV infection in Vero cells (Fig. 1).
It was reported previously that GLY affects porcine reproductive and respiratory syndrome virus (PRRSV) entry [53] and inhibits the replication of SARS-CoV in vitro [24]. In our studies, we demonstrated that GLY inhibited the entry and replication of PEDV but had no effect on virus assembly or release. GLY is a competitive inhibitor of high mobility group box 1 (HMGB1) that can inhibit the cytokine activity of HMGB1. Our previous studies showed that PEDV infection results in the acetylation and release of HMGB1, which promotes the release of proinflammatory cytokines [54]. In this study, we demonstrated that GLY inhibited the PEDV-induced increase in proinflammatory cytokines at the mRNA level (Fig. 3A, B, C). A similar result was observed in infected cells that were treated with GLY after virus internalization (Fig. 3D). HMGB1 binding to TLR4 to trigger cytokine release requires a reduced Cys106 and a disulfide bond between Cys23 and Cys45 in HMGB1 [55]. Our studies on the HMGB1 mutants (C45S, C106S, and C45S/C106S) corroborate that extracellular HMGB1 binding to TLR4 promotes inflammatory responses (Fig. 4), implying that the correct redox state of HMGB1 is essential for its cytokine activity. We also confirmed the involvement of RAGE in PEDV pathogenesis using a RAGE knockdown experiment (Fig. 5).

Our studies suggest that GLY could be used as an immunomodulatory agent against PEDV infection, because our in vitro experiments showed that PEDV infection results in a significant increase in proinflammatory cytokines, whereas GLY treatment attenuated the production of these cytokines along with a decrease in virus infectivity. An animal experiment showed that suckling and weaned pigs infected with PEDV release large amounts of TNF-α at different time points, while serum IL-8 levels were remarkably higher in infected weaned pigs than in infected suckling pigs [56]. Although that study did not determine the expression levels of other proinflammatory cytokines, we suspect that PEDV infection might cause an uncontrolled release of cytokines in pigs. The aberrant release of cytokines has been suggested to play a role in the pathogenesis of diarrhea: proinflammatory and anti-inflammatory cytokine production, locally or in other organs, induces inflammation and cellular infiltration into the lamina propria and other layers of the intestinal wall, which subsequently causes diarrhea and finally dehydration [57]. Therefore, we propose that manipulating the proinflammatory response with a chemical agent such as GLY could attenuate the severe impact of PEDV infection on animals. GLY has been shown to protect vital organs against porcine endotoxemia by modulating systemic inflammatory responses, reducing the protein and mRNA levels of HMGB1 and other proinflammatory cytokines [58]. It is known that administration of large amounts of GLY (licorice extract) causes hypokalemia and serious hypertension in both animals and humans [59,60], but these effects are reversible after GLY withdrawal [61]. Hence, short-term administration of GLY or its derivatives might not cause significant harm to animals. Collectively, our study suggests that GLY might be used as an immunomodulatory agent to attenuate the severe clinical symptoms of pigs infected with PEDV.
Sustainability is an urban development priority, and reducing energy consumption and carbon dioxide emissions is therefore becoming increasingly significant to the sustainability of urban transportation systems. Urban transportation systems are complex, however, and involve social, economic, and environmental aspects. We present solutions for a sustainable urban transportation system by establishing a simplified system dynamics model with a timeframe of 30 years (1995 to 2025) to simulate the effects of urban transportation management policies and to explore their potential for reducing vehicular fuel consumption and mitigating CO₂ emissions. Kaohsiung City was selected as a case study because it is the second largest metropolis in Taiwan and an important industrial center. Three policies are examined: a fuel tax, motorcycle parking management, and free bus service. Simulation results indicate that the fuel tax and motorcycle parking management policies are potentially the most effective at restraining growth in the number of private vehicles, fuel consumption, and CO₂ emissions. We also simulated a synthetic policy combining all three measures, which outperforms each individual policy. The conclusions of this study can assist urban transport planners in designing appropriate urban transport management strategies, and transport operation agencies in creating operational strategies to reduce energy consumption and CO₂ emissions. The proposed approach could be generalized to other cities to develop appropriate models for understanding the various effects of policies on energy use and CO₂ emissions.

Sustainable development has become a worldwide priority. It is viewed as development that meets current needs without compromising the ability of future generations to meet their own needs [1]. The transportation sector is important to sustainability because it supports the economy and most social activities and has substantial environmental impact [2]. Thus, a well-established urban transportation system should not only harmonize economic growth with land-use planning and promote the use of public transit systems but also conserve resources and be environmentally friendly [3-5]. According to Key World Energy Statistics [6], the transport sector's share of aggregate global energy demand increased from 23% in 1973 to 28% in 2012. The World Energy Outlook [7] reported that the transportation sector will account for 30% of the growth in petroleum consumption between 2004 and 2030. These findings indicate that the increasing use of motor vehicles will accelerate resource exhaustion and global warming, despite its promotion of road transportation mobility.

In Taiwan, the road transportation system not only facilitates the mobility of people and goods over space and time but is also essential for the industrial and economic development of Taiwan's trade-oriented economy. According to Taiwan's Statistical Abstract of Transportation and Communications [8], the number of registered vehicles in Taiwan rose from 4.7 million in 1980 to 21.3 million in September 2014.
This rise was a consequence of the increase in individual disposable income, the opening of the first national north-south expressway in 1978, and the subsequent improvement of the highway infrastructure: a second national north-south expressway, a west coast highway, and an east-west highway, among others. Along with the rapid growth in the number of motor vehicles, energy consumption in the road transportation sector reached the equivalent of 13,272 kl of oil in 2013, 3.37 times the 1980 level and 95.75% of aggregate transport fuel demand. The amount of CO₂ emitted by the road transportation system increased at an annual rate of 4.38%, from 8.2 million tons in 1980 to an estimated 33.7 million tons in 2013. Under the pressure of global warming and large fluctuations in fuel prices, issues of humanity-oriented transportation, energy conservation, and CO₂ mitigation have become important topics in transportation planning and management. The Ministry of Transportation and Communications (MOTC) in Taiwan invested NT$15 billion between 2010 and 2012 to reduce the number of private vehicles driven, fuel consumption, and CO₂ emissions through public transportation promotion programs.

Many academic works have focused on CO₂ emissions and energy consumption in the urban system context [9-18]. However, these works do not consider interactions between the various transportation subsystems. Moreover, a systematic approach that covers more aspects of the urban air pollution problem and can examine the effects of various transport policies is still lacking. An urban transportation system is complex and involves a variety of social, economic, and environmental issues. Interpreting the inherent mechanisms of the system and capturing the dynamic behavior of its components with analytical methods, such as decomposition analysis, grey theory, least-squares regression, and the geometric average method, is not easy, because the database is limited and because the subsystems are interlinked and mutually dependent. System dynamics (SD) provides a simulation platform for analyzing a large-scale, complex socioeconomic system with multiple variables that change over time. With the aid of an SD model, we selected Kaohsiung as a case study to explore the effects of variations in demographics, fuel prices, and economic growth rate, among other factors, on the number of vehicles, fuel consumption, and energy-related CO₂ emissions. In addition, we developed three scenarios based on possible policies that the city government could adopt, to simulate their potential both for reducing vehicular fuel consumption and for mitigating CO₂ emissions in Kaohsiung.

SD, which is based on systems theory, is a method for analyzing complex management problems with cause-effect relationships among different systems. Industrial Dynamics [19] was the first book to illustrate the influence of organizational structure, policies, and action delays on industrial activity. An urban dynamics model was then constructed to show the effects of the interactions among business, housing, and people on the growth pattern of a city. Finally, a large and complex socioeconomic simulation system, World Dynamics, was developed [20].
That work warned that the world socioeconomic system might collapse if actions were not taken to slow population growth and the continuous, unrestrained exploitation of natural resources. In recent years, the SD model has been widely used to analyze agricultural systems [21,22], environmental management and planning [23,24], industrial sectors [25-31], strategy planning and decision making [32-34], transportation systems [15,35-40], urban planning [41-45], waste management [46-50], and water resources and lake eutrophication [51-54]. The transport modes for distributing goods in Germany were explored with the aid of an SD model [35], in which policy interventions such as infrastructure investment and a carbon tax were simulated to examine their effects on energy savings, CO₂ reduction, public expenditure, and economic development. SD models have also been used to evaluate the influence of the traditional supply chain and the vendor-managed inventory system on the performance of a firm's supply chain [36]; to examine the effects of policy scenarios on traffic volume, modal share, energy conservation, and CO₂ mitigation [37]; and to investigate how interlinked subsystems, such as population, economy, transportation demand, transportation supply, and vehicular emissions of nitrous oxides, affect the dynamic development of urban transportation systems under five policy interventions on vehicle ownership [38]. An SD model was developed to explore the interrelationships among population, economy, housing, transport, and urban land in Hong Kong; the long-term constraints on and potentials for urban development yielded by the study were offered as policy suggestions for city planning [39].

Previous studies have rarely considered interactions among the various transportation subsystems simultaneously with CO₂ emissions and energy consumption. Although certain developed countries, such as the United States, the United Kingdom, and members of the European Union, have focused on improving fuel efficiency using advanced technologies [55,56], few studies have developed a practical SD approach that lets urban planners assess the effect of urban transportation policies on energy consumption and CO₂ emissions. This study examines three main urban transportation policies in our proposed model: a fuel tax, motorcycle parking management, and free bus service. Prior studies mainly investigated the effect of a single policy, such as fuel taxes in Europe and the US [57], parking management policies in China [58], and free bus policies in Japan, Belgium, and England [59-61]. Few studies analyze the effects of these policies on energy consumption and CO₂ emission reduction simultaneously and compare individual policies with a synthetic policy to assess their relative effectiveness. Our study aims to fill this research gap by developing a systematic, simplified analytical tool that can help urban planners evaluate the influence of various transportation policies on energy consumption and CO₂ emission reduction.

Briefly, an SD model describes the information, structural boundaries, strategies, and action delays inside the system structure through a feedback process. A quantitative simulation is performed to study the dynamic behavior of the interactions of interrelated components inside the system structure.
The SD model analyzes a complex system with multiple variables that change over time and determines how the system is affected by the implementation of specific policies [62]. In addition, Kummerow [63] revealed that the SD model not only incorporates qualitative mental and written information as well as quantitative data relatively easily but also can be used when the database is insufficient to support statistical forecasting analysis. Thus, the SD model is an appropriate approach for displaying the inherent behavior and influences inside the system structure despite multidirectional dynamic interactions and the fact that reality is far more complicated than we can effectively simulate [21, 27, 37, 38, 53, 39]. Although an SD model is an appropriate approach for simulating a complex and multidirectional dynamic system by constructing mathematical functions, it is a subjective and time-consuming operation. The causal relationships of the SD model are based on the subjective judgment of the operator, reference suggestions, data availability, and information acquisition. Thus, the simulation result will change if the operator adopts different stock and flow variables. In addition, error analysis based on historical statistical data should be performed to ensure that the forecasted results are accurate and efficient and that the causal relationships used are reasonable.

An SD model contains two parts. The first part is a causal-loop diagram that describes an idea, both conceptually and as a set of simplified cause-effect relationships between the different systems developed during model construction. The second part is a stock-flow diagram that represents the quantitative relationships among variables. A more detailed description follows. The relationships of real urban transportation systems are not likely to be simple, but the SD model offers an opportunity to show, with arrows, how interrelated variables in a system affect one another. A plus or minus sign indicates the direction of the variation between two variables: the "+" sign indicates that a change in one variable causes another variable to change in the same direction, and the "-" sign indicates that one variable causes another to change in the opposite direction. Fig. 1 shows the causal loop of the SD model for an urban transportation system (more explanation is given in Section 5).

A stock-flow diagram has four components: stocks, flows, auxiliary variables, and arrows (Appendix A). The stock variables are represented by labeled rectangles, e.g., "individual disposable income" and "urban population." Each stock variable accumulates all the values that flow into and out of it (indicated by the thick arrows pointing from and to the stock variables, such as "increases in individual disposable income") and reflects the condition within a system at a specific point in time. Stock variables can be changed only through flows. Thus, the value of the stock variable is controlled by the pipes (the thick arrows with a valve in the center and a cloud symbol at the end) pointing into or out of the stock variable. A flow variable refers to the rate of change over a certain interval of time. An auxiliary variable is an intermediate variable used to show the informational transformation process, the environmental parameter values, or the systematic test functions or values. The causal relationship between variables is depicted by the curved blue arrows.
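To make the stock-flow mechanics concrete, the following minimal Python sketch integrates a single stock ("urban population") with two flows (natural change and net migration) using yearly Euler steps, as SD tools such as Vensim do internally. This is an illustration of the generic mechanism only, not the authors' model; the function name, the initial population, and both rates are invented for the example.

def simulate_population(initial_pop, natural_ratio, migration_ratio,
                        start_year=1995, end_year=2025):
    """Return {year: stock value} for a one-stock, two-flow SD model."""
    pop = initial_pop                       # stock: accumulates its net flows
    trajectory = {start_year: pop}
    for year in range(start_year, end_year):
        natural_change = pop * natural_ratio    # flow: births minus deaths
        net_migration = pop * migration_ratio   # flow: driven by auxiliaries
        pop += natural_change + net_migration   # Euler step, dt = 1 year
        trajectory[year + 1] = pop
    return trajectory

# Example run with hypothetical rates: 0.3% natural growth, 0.05% migration.
print(round(simulate_population(1.39e6, 0.003, 0.0005)[2025]))

In a full model, the two rates would themselves be auxiliary variables fed by other stocks (income, traffic density, CO2), which is what closes the feedback loops of Fig. 1.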
The city of Kaohsiung in southwestern Taiwan comprises an area of 15,360 ha (59.3 square miles or 153.6 square kilometers). Kaohsiung City is the second largest metropolis in Taiwan and offers air, land, rail, and sea transportation. Air and sea transport traffic shape the city's industrial structure and the scope of its development. The Kaohsiung Harbor is an important transport point for the Taiwan Straits and the Bashi Channel. The Kaohsiung International Airport has 12 airlines flying worldwide through 40 air routes. Kaohsiung is not only important for the import and export businesses of Taiwan but is also Taiwan's industrial center because of the predominance of the international harbor and airport. Heavy industries, such as steel-making, refining, shipbuilding, and the manufacture of petrochemicals and cement, as well as two export-processing zones in Kaohsiung and neighboring NanTse, have significantly accelerated the diversity of local industrial activities and turned Kaohsiung into the most important industrial and commercial center in southern Taiwan. The population of Kaohsiung rose from 1.39 million in 1990 to 1.56 million in 2013. With the urbanization and internationalization of Kaohsiung, individual disposable income has also increased: in 2013, it was 29.6% higher than in 2003, with an average annual growth rate of 2.63%. The number of motor vehicles in the city grew at an annual rate of 1.55% over the past 10 years, reaching 1.57 million in 2013. Among these 1.57 million vehicles, private cars and motorcycles accounted for 25.70% and 70.91%, respectively; the percentages of light trucks, heavy trucks, and city buses were 2.66%, 0.71%, and 0.03%, respectively. Vehicle ownership rates for private cars and motorcycles were 259 and 715 vehicles per 1000 people, respectively.

In this study, the SD model includes seven subsystems: urban population, individual disposable income, private cars, motorcycles, light trucks, heavy trucks, and city buses (Appendix A). The size of the human population is the foundation of a city's development, and issues such as the growth rate in the number of motor vehicles, vehicular energy consumption, and CO2 emissions are derivatives of the interaction between the human population and economic activities. Based on this assumption, the subsystem of the transportation mode and the related variables [i.e., vehicle kilometers of travel (VKT), vehicular fuel efficiency, transfer ratio among modes, emission coefficient, and other factors] were added to the model after the dynamic behavior of urban population and individual disposable income had been determined. Furthermore, using commercial simulation software (Vensim 5; Ventana Systems, Inc., Harvard, MA), the causal relationships between the various components within the system were simulated from 1995 to 2025. Vensim is herein used to develop, analyze, and package high-quality dynamic feedback models. Models are constructed graphically or in a text editor. Features include dynamic functions, subscripting (arrays), Monte Carlo sensitivity analysis, optimization, data handling, application interfaces, and more. Vensim is an interactive software environment that allows the development, exploration, analysis, and optimization of simulation models [64]. Fig. 1 shows the causal loop of the SD model for the urban transportation system. Economic growth increases the number of motor vehicles and attracts more migrants from other cities.
The energy requirement and CO2 emission will rise as the number of motor vehicles increases. However, the increase in the amount of CO2 will reduce the growth rate of the urban population. Simultaneously, the number of motor vehicles will decrease with the reduction of the urban population. Economic development affects population, wherein economic growth leads to an increased number of vehicles [15]. Therefore, we assume that economic growth positively affects the number of motor vehicles and population growth. Moreover, fuel use can positively influence CO2 emission [15]. We thus assume that energy consumption can positively affect environmental issues (CO2 emission). The number of private vehicles and buses positively affects traffic congestion and energy consumption [17]. We assume that the number of city buses and the number of motor vehicles both positively affect traffic density and energy consumption. In addition, a tax policy on fuel can reduce the fuel consumption of motor vehicles, leading to reduced CO2 emissions [65]. Traffic density is also a significant and robust predictor of inhabitant survival, more so than ambient air quality [66]. We assume an association between traffic density and CO2 emission as well as a negative effect of traffic density on population growth. In Fig. 1, the p value of each variable is less than 0.05, indicating statistical significance.

In Kaohsiung, one out of two residents owns a motorcycle, whereas one out of three residents owns a car. These residents are accustomed to the convenience, independence, and flexibility provided by private vehicles. Energy consumption and CO2 emission issues are mainly derived from private vehicles. Therefore, we mainly focused on the influence of private vehicles in our case. Less than 1% of the population uses the taxi service because of its higher charge compared with other forms of transportation [67]. Moreover, the Uber platform service has not been officially approved to operate by the authorities, and this service is thus less popular in Kaohsiung. Road motor vehicles account for a relatively large share of CO2 emission and energy consumption (95.75%) compared with the metro system, which, having its own electrification system, supplies electric power for movement without a local fuel supply [68]. Therefore, other possible variables, including vehicle technology, emission legislation, and vehicle fleet age, are not yet pressing for Kaohsiung at this stage and can be examined in future studies. Contributions of our study include the use of a systematic approach to examine energy and CO2 emission reduction achieved by implementing various transport policies in the urban transportation context. The proposed approach can be adopted in other cities, whose specific features should be considered in developing an appropriate model to understand the various effects of policies.

We considered two equations to explain the effects of individual disposable income and the economically active population on the number of motorcycles in the main text. Linear least-squares regression analysis was performed to reflect the effects of individual disposable income and the economically active population on the variation in private cars and motorcycles. The SD model is an approach to understanding the behavior of complex systems over time. SD can incorporate fuel price effects while accounting for delays over time. Therefore, we used DELAY1I to consider this time effect.
This delay function can be used in equations as in normal SD modeling and is frequently used in SD for modeling postponed effects. We adopted the delay function to model the effect of fuel price on the relationship between the decrease in private car use and the increase in fuel price; the postponed effects of fuel price are thereby considered in our model. The decrease in private car use associated with an increased fuel price can be estimated by the following formula:

Decrease in private car use = private cars × (33.8% / 13.45%) × (Fuel price − DELAY1I(Fuel price, 1, 1)) / DELAY1I(Fuel price, 1, 1),

where 33.8% is the percentage by which the number of private cars driven decreases when fuel prices increase by 13.45%.

The size of the human population not only reflects the scale of urban development but also drives the transport demand. The size of the urban population was selected as a stock variable, and natural changes in the population and changes caused by social migration were selected as flow variables, because the size of the human population is affected by both. The natural change in the population is formulated as the product of the human population and the natural population ratio per year, where the natural population ratio is adopted from the Statistical Yearbook of Kaohsiung City [69]. In addition, individual disposable income, traffic density, and aggregate CO2 emissions were considered in this study to control the variations in social migration.

Economic performance is an important index for evaluating the competitiveness of a city. If individual disposable income grows, then the number of migrants and the number of motor vehicles increase; otherwise, these values decrease. Therefore, individual disposable income was chosen as a stock variable dependent on the growth ratio of the GDP [70]. Furthermore, the prediction of the Global Insight database [71] for the future GDP growth ratio of Taiwan was reduced by 0.5% to avoid an overestimate.

Several studies have indicated that the number of motor vehicles, as well as the number of new vehicles purchased, is closely associated with economic growth and population [72-76]. We analyzed this subsystem to evaluate the effect of changes in the level of individual disposable income and in the size of the economically active population on the variation in the number of private cars. According to the 2008 survey of the MOTC, the number of private cars driven decreased by 33.8% after fuel prices increased by 13.45%. Thus, the effect of fuel prices on the variation in automobile use was also considered because of the rise in the price of crude oil, which has almost doubled since the beginning of 2007. The auxiliary variable energy consumption by private cars was calculated by multiplying the number of private cars, the VKT, and the inverse of the average vehicular fuel efficiency (km/l), where the values of VKT and fuel efficiency were obtained from the Taiwan Emissions Database System (TEDS) 8.1. Estimations of energy-related CO2 emissions were determined by the product of vehicular energy consumption and its emission coefficient, published by the Intergovernmental Panel on Climate Change (IPCC). Electric vehicles were omitted from the model because electric vehicle technology in Taiwan is currently in its early stages.
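The delay logic above can be sketched in a few lines of Python. The snippet below is an illustrative re-implementation of a first-order information delay in the style of Vensim's DELAY1I, together with the car-use adjustment formula from the text; it is not the authors' code, and the fuel price path and car count are invented for the example.

# First-order information delay in the style of DELAY1I(input, D, init):
# the delayed value y follows dy/dt = (x - y) / D, starting from init.
def delay1i(series, delay_time=1.0, initial=0.0, dt=1.0):
    y = initial
    delayed = []
    for x in series:
        delayed.append(y)
        y += (x - y) / delay_time * dt   # first-order exponential smoothing
    return delayed

# Decrease in private car use, following the formula in the text:
# cars * (33.8% / 13.45%) * (price - delayed price) / delayed price.
def decrease_in_car_use(cars, price, delayed_price):
    return cars * (0.338 / 0.1345) * (price - delayed_price) / delayed_price

fuel_price = [25.0, 25.0, 28.0, 31.0, 31.0]   # NT$/l, hypothetical path
lagged = delay1i(fuel_price, delay_time=1.0, initial=fuel_price[0])
for p, lp in zip(fuel_price, lagged):
    print(round(decrease_in_car_use(400_000, p, lp)))  # 400,000 cars assumed

Because the delayed price lags the current price, the adjustment responds only after a fuel price change has persisted, which is exactly the postponed effect the text describes.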
Moreover, the central and city governments have not provided strong incentives, such as direct subsidies, tax reductions, or regulatory policies, to increase the use of electric vehicles. From the user's perspective, the short driving range and low speed of electric vehicles have made them less popular in Taiwan. Even considering improvements in vehicle fuel economy, the dense traffic in Kaohsiung forces cars to stop and go frequently, so fuel economy improvements are not significant in the local context. Therefore, a fuel tax policy seems to remain useful in Kaohsiung.

The Kaohsiung Mass Rapid Transit (KMRT) system opened for service in 2008. The system not only provided a new lifestyle for citizens but also reduced the number of private vehicles used for commuting to work. To reflect the influence of the KMRT on vehicular fuel consumption, the transit system was also incorporated into this subsystem. Specifically, the decrease in the number of commuters who switched from private cars to the KMRT was estimated by combining the average number of passengers carried by the KMRT, the average number of kilometers per passenger trip, the average number of occupants per automobile, and the transfer ratio. The effect on vehicular energy consumption was then calculated by multiplying the decrease in private car use and the average vehicular fuel efficiency of private cars.

Motorcycles accounted for 70.91% of the 1.57 million registered vehicles in Kaohsiung in 2013; motorcycles provide greater mobility and are less expensive than other types of motor vehicles. In this study, the increase in the number of motorcycles was primarily driven by individual disposable income and the size of the economically active population [76-79]. As mentioned previously, fluctuations in fuel prices affect the number of private vehicles driven and the distances they are driven. Hence, the effect of fuel prices on mode choice and mode transfer was further incorporated into the model through the operation of two flow variables: mode transfer from private cars and the decrease in motorcycle use caused by fuel price increases (Appendix A). In addition, the formulation of vehicular fuel consumption and associated CO2 emissions for motorcycles was the same as that for private cars, but the average occupancy rate (the number of passengers per motorcycle) and the transfer ratio between the KMRT and motorcycles were different.

As the site of the world's thirteenth largest international port (2013) and the largest industrial center in Taiwan, the city of Kaohsiung is important both for freight transportation and for industrial and commercial activities. Much cargo and freight must be transported to the northern metropolitan areas because Kaohsiung is a harbor city located in the south of Taiwan. After exiting the harbor, some heavy trucks must pass through the city area to reach the highway system. This is the reason heavy-duty trucks are considered in our model. The dynamic behavior of this subsystem is analyzed through the operation of one stock variable (the number of heavy trucks), one flow variable (the increase in the number of heavy trucks), and two auxiliary variables (the effect of GDP on the heavy truck function and the growth ratio of GDP). The growth of freight transport demand is primarily a consequence of the growth of economic activity [80-83, 5].
Hence, the growth of GDP was selected as the motivational factor in this study to reflect the variation in the number of heavy trucks. The auxiliary variable effect of GDP on the heavy truck function was constructed based on the concept of a table function, which is a graphical tool that captures the causal and non-linear relationship between two variables.

Business activities and commercial services, such as food markets, street vendors, bazaars, superstores, cargo carriers, and other such entities, are closely linked with the number of light trucks. Thus, the GDP growth rate was selected as an auxiliary variable to reflect the effect of economic development. In this subsystem, the number of light trucks was defined as a stock variable, and the change in the number of light trucks was defined as a flow variable driven by the growth of GDP through a table function. The formula used to calculate the aggregate energy demand of light trucks was the same as the one used for heavy trucks.

Despite the city buses' modal share of only 7.2%, they were incorporated into the model to reflect a complete picture of the transportation system in Kaohsiung City. In this subsystem, the number of city buses was selected as a stock variable, and its value is influenced by the flow variable the annual change in the number of city buses. The historical values of auxiliary variables and the government-set target determined the number of city buses through the feedback loops. To improve the quality of the city bus service, the city bus operation agencies have added 156 buses since 2008 by adjusting the frequency and routes of city buses, releasing 30 government-run routes to private enterprises, enhancing real-time bus information, and upgrading service quality.

6. Discussion of analytical results

To assess the effectiveness of the proposed model, the simulation results were validated by comparing the estimated values with their historical trends [21, 27, 37, 38, 53, 39]. The examined variables included urban population, individual disposable income, motorcycles, private cars, light trucks, and heavy trucks (Tables 1 and 2). The model developed in this study appears to be reasonable because the relative errors were all less than 10% [38]. The behavior analyzed using the reference model was simulated from 1995 to 2025 based on existing socioeconomic conditions and policies.

The decline in the natural population of Taiwan over the past 19 years has lowered the growth rate of the urban population. A decreasing natural population is both the current and the future trend in most developed countries. Our simulation predicted that in 2025, the population of Kaohsiung would gradually decline to 1.44 million, 54,128 fewer than today's population (Table 3). Global Insight projected that the annual economic growth rate of Taiwan in 2015 would be 0.07% higher than in 2014. However, over the next 11 years, the growth rate of the GDP of Taiwan is expected to be lower than during the past two decades. Given this slowdown in economic activity, individual disposable income will grow at only a moderate rate. For example, our simulation predicts that the annual growth rate of individual disposable income from 2014 to 2025 will be 1.95%, which is lower than previous rates. Our simulation also predicts that this income will reach NT$ 539,977 in 2025 (1 US dollar = 30 NT$).
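Two small helpers recur in this kind of SD workflow: the table (lookup) function just described, and the relative-error check used for validation. The Python sketch below illustrates both; the breakpoints and sample numbers are invented for illustration and are not taken from the paper's tables.

import numpy as np

# (i) Table function: piecewise-linear lookup mapping GDP growth to an
# effect multiplier on heavy-truck growth (breakpoints are hypothetical).
gdp_growth_pts = [-0.02, 0.00, 0.02, 0.04, 0.06]
effect_pts     = [ 0.90, 1.00, 1.06, 1.10, 1.12]

def table_function(gdp_growth):
    return np.interp(gdp_growth, gdp_growth_pts, effect_pts)

# (ii) Validation: relative error between simulated and historical values;
# the paper accepts the model when all such errors stay below 10%.
def relative_error(simulated, observed):
    return abs(simulated - observed) / observed

print(table_function(0.03))                    # interpolated effect
print(relative_error(1.52e6, 1.57e6) < 0.10)   # True -> within tolerance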
The simulation indicated that the number of motorcycles will increase by 68,659 vehicles between 2014 and 2025, a growth rate of 0.57% per year. This growth in the numbers of motorcycles and private cars is attributed to the size of the economically active population, the level of individual disposable income, and the variation in fuel prices. Similarly, the simulation estimated that the number of private cars in 2025 will be 26,570, a decrease of 9.23% over 11 years. Economic weakness will also cause a slow growth rate (1.57%) in the number of heavy trucks until 2025, when such vehicles will number 17,427, an increase of 18.65% compared with 2014. The effect of a lower GDP growth rate on the number of light trucks will be limited because they are used for daily commodity exchanges and business transactions. The simulation showed that the number of light trucks will grow by an average of 4.15% per year, reaching 77,198 vehicles in 2025. After 2014, the aggregate energy consumed by motor vehicles will increase by 1.15% until 2025. The aggregate increase in CO2 emissions will be nearly 354,041 metric tons between 2014 and 2025, 14.59% higher than the emission level in 2014.

Most of our simulated results have an estimation error lower than 5%, with rare exceptions; the prediction capability of our model is therefore acceptable [38]. The main percentage errors are concentrated between 1999 and 2003, possibly because of the Severe Acute Respiratory Syndrome (SARS) outbreak in Taiwan between 2002 and 2003. SARS caused widespread social disruption and economic losses, and its economic effect was considerable in Taiwan. Moreover, after Taiwan's first experience of party alternation in 2000, the government system experienced instability in its early stages, which negatively affected economic propensity and motor-vehicle growth. These major unusual events caused the disturbances in our model predictions during this period. Because the data were limited, we could not fully capture the real effect of various transportation policies on CO2 emission and energy use reduction. To demonstrate the accuracy of our proposed model, a comparison was performed between real data from 2013 to 2014 and the estimated number of motor vehicles in the reference model during the same period, after a free bus policy was implemented in 2013. The deviation between simulated and real data is within 5%, which is reasonable [38]. A possible reason for the reduced population in 2007 is that increasing labor costs encouraged numerous manufacturers to leave Kaohsiung, reducing the number of residents in the city.

Among all strategies for sustainable transport policy, the implementation of programs that encourage the use of the public transportation system through benefits, such as subsidies, free transfers, or transfer discounts, and deterrents (e.g., restraining the use of private vehicles through parking management and fuel taxes) is the most discussed and encouraged in Taiwan. Furthermore, the Taiwanese government is considering an additional NT$ 2.5 per liter tax on fuel to reflect social justice and the user-pays principle and to restrain the use of private vehicles.
Thus, based on the various assumptions and the past trends of the variables in the reference model, the policies of fuel tax, motorcycle parking management, free bus service, and a synthetic policy are discussed in this study to explore their energy-saving and CO2-emission-reducing potential (see Tables 4 and 5). We analyzed three scenarios considering low, medium, and high oil prices (see Tables 6-9). We used the average oil price to represent the medium price; the high oil price is estimated as the average oil price plus one standard deviation of the oil price, and the low oil price as the average minus one standard deviation.

This study examines appropriate urban transportation policies that mitigate the global warming effect arising mainly from CO2 emission. Nitrogen oxides (NOx), hydrocarbons (HC), CO, and soot emissions affect the health of urban populations. However, owing to data limitations, we assume that the relationship between CO2 emission and NOx, HC, CO, and soot emissions is one of proportional equivalence. The estimated NOx, HC, CO, and soot emissions are also reported. A detailed study of the precise toxicity of the emissions in the model can be undertaken in future work.

We simulated the scenario of a fuel tax because an increase in oil price not only influences the transportation mode choice but also reduces vehicular energy consumption. Oil price is therefore a relatively direct and efficient incentive for inducing consumers to reduce private vehicle use, which lowers fuel consumption and CO2 emissions. Currently, a fixed fuel tax is levied per year according to the engine capacity of vehicles in Taiwan. An additional NT$ 1 per liter of fuel tax is also being considered for the next 10 years. In our simulation, once the tax was included in the prices of gasoline and oil and levied according to the amount of fuel used, the numbers of motorcycles and automobiles used in Kaohsiung were both predicted to decrease. Overall, the number of vehicles in 2025 is estimated to be 1.3 million (Fig. 2), which is 13.2% lower than the base. This reduction in the number of motor vehicles caused by the increase in fuel prices will lead to changes in the modal shares of the means of transport. Under this policy, the projected vehicular energy consumption varies from 991,822 kl to 992,053 kl between 2008 and 2025. During the same period, motor-vehicle CO2 emissions are expected to increase by 267,664 metric tons. Compared with the reference model, the energy requirements and CO2 emissions in 2025 are predicted to be 11.0% and 9.9% lower, respectively.

The increase in the price of crude oil will not only reduce fuel consumption but will also force a transformation in traffic modes. As seen in the reference model, the number of motorcycles in Kaohsiung increased because of the prior rise in fuel prices. Since 2008, the Kaohsiung City government has planned a system of six regional transit centers, areas composed of two major and four subsidiary transit stations that link the KMRT and the shuttle bus terminals in Kaohsiung into a 30-minute-access metropolitan circle. Under these measures, the number of passengers carried by mass transit increased by about 60 million.
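The oil-price scenario construction described above reduces to a few lines of arithmetic. The Python sketch below illustrates it with a hypothetical price series (the values are invented, not the study's data); the only assumption beyond the text is the use of the sample standard deviation.

import statistics

# Scenario construction: medium = mean oil price, high/low = mean +/- one
# standard deviation. Prices below are hypothetical NT$/l values.
historical_prices = [24.1, 26.5, 30.2, 28.7, 25.9, 31.4, 29.8]

mean_price = statistics.mean(historical_prices)
sd_price = statistics.stdev(historical_prices)   # sample standard deviation

scenarios = {
    "low": mean_price - sd_price,
    "medium": mean_price,
    "high": mean_price + sd_price,
}
for name, price in scenarios.items():
    print(f"{name:>6}: {price:.2f} NT$/l")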
In 2005, the Taipei City government introduced a successful parking management program that prohibits motorcycles from being parked on sidewalks and in building arcades, requires payment for roadside motorcycle parking, and offers a parking-information inquiry system. The ownership rate of motorcycles in Kaohsiung City is 71.5%, the highest in Taiwan. Following the success of the Taipei policy, Kaohsiung has introduced a similar system at popular centers such as night markets, train stations, and department stores since April 2012 to reduce motorcycle use. In this study, the rate of shifting from motorcycles to the KMRT, city buses, and bicycles was based on a survey conducted by the Taipei parking management office, because the motorcycle parking management system in Kaohsiung is still being implemented (Fig. 3). The number of motor vehicles driven in Kaohsiung will sharply decrease when the city introduces and enforces its parking management policy. Our simulation estimated that the number of motor vehicles will decline to 1.17 million by the end of 2025, which is 21.7% fewer than in the reference model. Concurrently, fuel consumption and CO2 emissions will be 6.0% and 5.3% lower than in the base model.

In the free bus scenario, the simulation assumed a 40% increase in bus ridership if both free bus service and discounted tickets for transfers from the KMRT to the bus were extended to the other weekdays. The simulation outcome indicates that the number of vehicles in Kaohsiung City will decrease by 2.3% (Fig. 4). In 2025, the number of motor vehicles will reach 1.46 million, whereas the vehicular fuel requirement will decrease by only 0.8%. By 2025, the need for vehicular fuel will increase by 194,902 kl. The change in energy consumption also implies an estimated increase in CO2 emissions to 2.76 million metric tons in 2025, 0.52 million more than in 2008.

To evaluate the maximum potential for vehicular fuel and CO2 reduction, the interventions acting together as a package of measures were also considered in this study. According to the simulation, the number of vehicles in Kaohsiung City will decrease to 0.95 million in 2025, which is 36.6% lower than in the base model (Fig. 5). The vehicular fuel requirement will increase slightly from 908,169 kl to 916,558 kl between 2008 and 2025; the value at the end of 2025 is 17.8% lower than that in the reference model. The growth patterns of CO2 emission and energy demand are similar because the variation in CO2 levels is directly related to energy consumption. Thus, aggregate CO2 emission will amount to 2.34 million metric tons by 2025, which is 0.44 million metric tons lower than in the base model. The forecasted patterns indicate that aggregate CO2 emission would still need to be reduced by about 0.15 million metric tons to reach the emission level of 2000. This result underscores the difficulty and urgency of CO2 mitigation in Kaohsiung City, even with the synthetic policy considered in the SD model.
Despite the limited effects of the separate policies of motorcycle parking management and free bus service on reducing vehicular fuel consumption, the government was able to reduce the number of private vehicles in use and promote the use of the public transit system. Thus, we suggest that all three policies be implemented simultaneously to restrain the growth of the number of private vehicles, motor-vehicle fuel consumption, and CO2 emissions in Kaohsiung. With regard to the effect of the various policies, the number of motor vehicles, CO2 emission, and energy consumption decreased significantly between 2007 and 2009, probably because of the global financial crisis, whose negative influence slowed economic development (see Figs. 6-8).

The SD model is not only able to analyze a system with many interrelated variables but can also describe its dynamic trends based on a limited information set. Using the simplified SD model we constructed to analyze urban population, disposable income, the number of motor vehicles, vehicular energy consumption, and CO2 emissions, we conclude that the fuel tax policy is the most effective method for reducing vehicular fuel consumption and CO2 emissions, more effective than the motorcycle parking management and free bus service policies. According to the investigation of the MOTC of Taiwan, fluctuations in fuel prices affect the number of private vehicles driven and the distances they are driven. For instance, the use of private cars and motorcycles decreased by 33.8% and 10.2%, respectively, while the rate of transfer from private cars to motorcycles was 28.36%, when the average price of gasoline increased by 13.45%. The simulation of a fuel tax also suggests that the increase in fuel prices will lead to changes in the modal shares of the means of transport. The number of motor vehicles in Kaohsiung will decline by 13.2% in 2025, with a 0.5% decrease in the actual number of registered motor vehicles in the city between 2008 and 2025. The fuel tax will also cause a considerable reduction in the growth rates of vehicular fuel use and CO2 emissions. The motorcycle parking management policy will cause a 21.7% decrease in the number of motor vehicles by 2025, as well as 6.0% and 5.3% reductions in fuel demand and CO2 emissions, respectively. An extensively implemented free bus service will reduce the number of motor vehicles and the fuel requirement by only 2.3% and 0.8%, respectively. Furthermore, the maximum potential for vehicular fuel and CO2 reduction is achieved in the scenario in which all the interventions act together as a package of measures. In 2025, the aggregate vehicular energy requirement and CO2 emission will reach 916,558 kl and 2.34 million metric tons, respectively, a 17.8% and a 16.0% decrease in energy requirement and CO2 emission compared with the reference model. Simulation results indicate that the fuel tax and motorcycle parking management policies are potentially the most effective methods for restraining the growth of the number of private vehicles, fuel consumption, and CO2 emissions. The synthetic policy consisting of all three policies outperforms each individual policy.

Compared with other countries, Taiwan is densely populated (average population density of 646 persons per square kilometer as of 2014) and has limited energy resources.
In terms of energy consumption, the Taiwanese economy is sensitive to oil price variations because the country lacks conventional energy resources and is highly dependent on energy imports (nearly 99% of total energy consumption). As in South Korea, road transportation in Taiwan accounts for more than 80% of the CO2 emission of the transport sector [84]. Taiwan is not yet a member of the United Nations Framework Convention on Climate Change. The country's CO2 emission increased significantly over the past two decades, making Taiwan the 23rd largest CO2 emitter in the world [6]. Taiwan's transportation sector accounted for 15% of the country's CO2 emission in 2012. Taiwan, which has recently transformed from a developing country into a developed country [85], pursues economic development despite limited energy resources. Therefore, finding a compromise between economic development on the one hand and energy consumption and CO2 emission on the other is a critical issue for Taiwan. Many transferable lessons can be learned from Taiwan's experience, which can serve as a useful reference for countries with analogous characteristics in terms of economic development pattern, high population density, and high energy dependence.

With respect to the generalizability of the proposed model, this study proposes policies to restrain the use of private vehicles, for example, by increasing the fuel tax and launching a strict motorcycle parking management strategy. This study also examines the policy of providing free bus service from the perspective of increasing the public transportation service supply and enhancing service quality to decrease urban transportation energy consumption and CO2 emission. We present the example of Kaohsiung, a city that is highly dependent on private vehicles (i.e., every two residents have one motorcycle, and every three residents have one private car). The lessons from Kaohsiung are applicable to other cities with similar population density, urban environment, and economic development pattern, especially Asian cities such as Bangkok, Kuala Lumpur, and Ho Chi Minh City, which are characterized by the high popularity of motorcycles and limited public transportation services. The proposed SD model simultaneously examines the influence of factors including GDP evolution, population growth, and individual disposable income on the urban transportation energy consumption and CO2 emission of various urban transportation systems. The model also considers the interactions among these factors over time to assess the effectiveness of various urban transportation policies. Cities can modify our proposed approach according to their specific urban environment, economic development pattern, and public transportation service level to derive an appropriate model for understanding the influence of urban transportation policies on energy consumption and CO2 emission. Applications of the SD model to other programs, such as urban planning, low-emission vehicles, speed limits, high-occupancy vehicle control lanes, and strengthened energy conservation standards for new vehicles, are certainly conceivable. They would provide a helpful reference for city governments in urban development planning and in setting transport-related energy policies.

Implementing a free bus policy requires a certain amount of funding to subsidize passenger fares.
In 2013, the central government provided 1.67 million US dollars to Kaohsiung to implement a free bus policy for two months. The motorcycle parking management and fuel tax policies also require extra administration and resources to cover their costs. Compared with the latter policies, implementing a free bus policy appears to be more costly; among the three proposed policies, the fuel tax policy seems to be the most cost-effective. Information on the cost of implementing different policy measures is useful for urban planners and decision makers. However, owing to data limitations, a precise cost-benefit analysis of the various scenarios is left to future studies.

[Appendix table: classification, notation, and data sources for the urban transportation system in Kaohsiung City]
Author Contribution: JMJ, MSR, and MK conceived the study. MSR, JMJ, and BS designed the study and performed data collection. MSR oversaw data collection and performed data analysis. CS and WK provided statistical advice on analyzed data and presentation of data. CH, TJB, and MF provided advice and expertise on content. JMJ drafted the manuscript, and all authors contributed significantly to its revision. JMJ takes complete responsibility for the submitted paper.

To the Editor: While healthcare workers (HCWs) have been recognized as a high-risk group for contracting COVID-19, 1 there are no studies, to our knowledge, that report the rate of COVID-19 seroconversion in emergency professionals. Between February 1, 2020 and April 30, 2020, over 1,000 patients diagnosed with COVID-19 presented to our ED in Brooklyn, New York. As these patients were arriving in overwhelming numbers, there was much uncertainty and concern regarding the risk of infection for emergency professionals. Here we report the rate of COVID-19 seroconversion for emergency professionals at our urban academic emergency department (ED) following this surge, and describe characteristics associated with seroconversion. To better understand the effects of COVID-19 on our ED, we conducted a retrospective review of a quality improvement (QI) database consisting of SARS-CoV-2 IgG antibody test results (Abbott Laboratories, Abbott Park, IL), as well as self-reported demographic, symptomatologic, and occupational characteristics for emergency professionals who were actively working in the adult ED from February through April 2020. There were 65 emergency professionals who were eligible to be entered into this QI database. A total of 50 (77%) professionals volunteered to receive antibody testing and were included in our study.

We found the overall rate of seroconversion in our emergency professionals to be 46%. Rates for attending physicians, EM residents, and PAs were 64%, 36%, and 29%, respectively. Published rates of infection for HCWs are limited; however, a study from the Netherlands reported a much lower prevalence of COVID-19 in all HCWs, at 6%. 2 Recent antibody testing within New York City (NYC) has estimated the community seroprevalence of COVID-19 to be lower than our findings, at 19.9%, 3 further highlighting emergency professionals as a high-risk group. We also analyzed whether factors such as intubation, hours worked, and symptomatology were associated with COVID-19 seroconversion. Intubation of COVID-19 patients was performed by 65% of seropositive and 59% of seronegative professionals; these findings were not strongly associated with COVID-19 seroconversion. While our experience is limited to a single ED in NYC, these findings may provide insight into COVID-19 seroconversion among other emergency professionals. Further research is needed to determine the true risk of infection in this group.
Sialic acids are compounds derived from neuraminic acid that belong to a large family of complex nine-carbon sugars usually bound to other carbohydrates through α-ketosidic bonds. In mammals, sialic acids are found at the non-reducing end of glycoconjugates [1]. Influenza viruses are the oldest and most important examples of viruses that recognize sialic acids as the surface receptor for entry into host cells, and their binding and propagation through interaction with these receptors have been well documented [2]. Avian flu virus strains preferentially bind to sialic acids linked to galactose through an α2-3 bond, while human flu virus strains preferentially attach to sialic acids linked to galactose through an α2-6 bond [3, 4]. In contrast to other human viruses, the Influenza A(H1N1)pdm09 virus showed strong tropism for both types of receptors during the 2009 pandemic [5, 6]. This feature may explain the pandemic potential acquired by this virus, since it permitted the virus of swine origin to bind to Siaα2-6Gal(NAC) receptors of the upper respiratory tract, facilitating interpersonal transmission. On the other hand, maintenance of the capacity to bind to Siaα2-3Galβ1-receptors permitted the virus to replicate in the lower respiratory tract, a fact explaining more severe cases of influenza, such as the severe viral pneumonias observed even in young adults without comorbidities [3, 4, 6].

Different host genetic variants may be related to the virulence and transmissibility of pandemic Influenza A(H1N1)pdm09, influencing events such as binding of the virus to the entry receptor on the cells of infected individuals and the host immune response [7]. The ST3GAL1 gene (ST3 beta-galactoside alpha-2,3-sialyltransferase 1) is located on the long arm of chromosome 8 (8q24.22) and encodes the Siaα2-3Galβ1-receptor. Different polymorphisms have been described in this gene. Three SNPs (rs939024, rs2978041 and rs2945733) have so far been identified in coding regions related to bipolar disorders, but not to infectious diseases in humans [8, 9, 10]. ST3GAL1 gene variants may be related to a higher or lower expression of the receptor on the surface of pneumocytes and may thus interfere with the capacity of the Influenza A(H1N1)pdm09 virus to infect cells of the lower respiratory tract [5, 6], contributing to complications of this disease. Therefore, the present study investigated genetic variants of the ST3GAL1 gene and correlated the findings with the progression of Influenza A(H1N1)pdm09 infection in a Brazilian population.

The demographic and clinical features of the participants are shown in Table 1, in which the 356 patients were divided into three groups according to severity: a group of patients with classical symptoms who did not require hospitalization (n = 157), a group with severe acute respiratory syndrome (SARS) who required hospitalization and survived the infection (n = 123), and a group of patients who were hospitalized but died of the infection (n = 76). There was a predominance of women in all groups (58%, 62.6% and 69.7% of non-hospitalized patients, hospitalized patients and patients who died, respectively). Patients who died were older than those in the other two groups (p < 0.001). Among the comorbidities observed, metabolic disorders (p < 0.001), immunosuppression (p < 0.001) and obesity (p = 0.001) were associated with more severe disease, as was an abnormal chest X-ray (p < 0.001).
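As an aside on how such group comparisons are computed, the Python sketch below applies Fisher's exact test (one of the tests named in the Methods) to a 2x2 table of a comorbidity against outcome. The counts are invented for illustration and are not the study's data.

from scipy.stats import fisher_exact

# Hypothetical 2x2 table: comorbidity (present/absent) x outcome
# (died/survived). Rows sum to the illustrative group sizes only.
table = [[30, 46],    # comorbidity present: died, survived
         [46, 234]]   # comorbidity absent:  died, survived

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")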
The frequency of pregnancy, smoking, obesity, lung disease, heart disease, nephropathy or hemoglobinopathies did not differ significantly between groups. However, the absence of comorbidities was a protective factor in the sample (p = 0.049); in this respect, 72% of the subjects who did not require hospitalization had no associated diseases. The mean genetic contributions of the parental groups forming the study population are shown in Table 2. Significant differences in the European and African genetic contributions were observed between groups (p = 0.004 and p = 0.007, respectively), with a higher European genetic contribution among non-hospitalized patients and a higher African genetic contribution among patients who died.

The allele and genotype frequencies of the ST3GAL1 gene polymorphisms did not deviate from Hardy-Weinberg equilibrium. The polymorphism rs113350588, located in exon four, and rs1048479, located in exon eight, result in synonymous substitutions of an aspartate (D) at position 95 and of a serine (S) at position 273 of the protein, respectively. In silico functional analysis suggests that both variants may have putative direct and indirect effects on gene regulation (Table A in S1 File). Splicing analyses suggested that both SNPs may alter splicing of the transcript and consequently the isoforms of the protein. The substitution of guanine (G) for adenine (A) in rs113350588 promotes a change in an exonic splicing enhancer site (disrupting the sites for SF2/ASF protein interactions), while the substitution of cytosine (C) for thymine (T) in rs1048479 activates a cryptic acceptor site, with the presence of one or more cryptic branch points. These polymorphisms were in linkage disequilibrium (D' = 0.65) in the population studied, resulting in four haplotype alleles that form nine observable genotypes (diplotypes) (Tables B and C in S1 File). Moreover, analyses conducted with HaploReg 3.0 showed that rs1048479 is in linkage disequilibrium with two other polymorphisms (rs2142306 and rs276865) in all populations deposited in the 1000 Genomes pilot project. These SNPs are located in non-coding putative regulatory regions (3'UTR and intronic regions, respectively). The functional characterization of these SNPs can be seen in Table A in S1 File. The results suggest that these polymorphisms are located in regulatory regions (TF binding sites as well as sites of histone enhancer marks in several tissues) and that the alleles present different affinities for protein-DNA interaction (i.e., the alternate C allele of both polymorphisms reduces the predicted affinity for the proteins RXRA, SETDB1, Znf143 and Myb when compared with the wild-type T allele). The expression profile of the interacting proteins (RXRA, SETDB1, Znf143, Myb) was evaluated in The Human Protein Atlas [11]. All the proteins showed a high expression level in respiratory tissue, except the Myb transcription factor, which presents a medium expression level. Taken together, our results suggest that rs113350588 and rs1048479 may alter the function of ST3GAL1 either directly, through altered splicing regulation, and/or indirectly, through LD with SNPs with regulatory function.

There were no significant differences in the distribution of allele or genotype frequencies between patient groups (Table 3). A higher frequency of the GC and AT haplotypes was observed in patients who died (13.2% and 22.4%, respectively) when compared to patients who were not hospitalized (7.0% and 6.4%) and hospitalized patients who survived (4.1% and 8.1%) (Table 4).
The influence of these haplotypes on the risk of more severe disease or death was evaluated using logistic regression models (Table 5). Patients carrying the GC haplotype did not exhibit a higher risk of more severe disease, but the risk of death due to infection with Influenza A(H1N1)pdm09 was increased in this group (OR = 4.159, 95% CI = 1.55-11.12).

On April 21st, 2009, the Centers for Disease Control and Prevention (CDC) reported two cases of infection with a new influenza virus strain that had occurred in California, USA [12]. This strain rapidly spread around the world and gave origin to the first influenza pandemic of the 21st century [13]. In this pandemic, it was notable that the incidence of severe disease was higher among young adults than among individuals older than 50 years [14], which differed from observations made during annual epidemics caused by other human viral subtypes [15]. In the sample studied, the mean age of hospitalized patients was 22 years and the mean age of patients who died was 30 years. In August 2009, among the cases of pandemic influenza notified in 122 cities of the United States, more than 85% of confirmed deaths due to the pandemic strain occurred in individuals younger than 60 years, with a mean age at death of 37 years. In contrast, in epidemics caused by seasonal strains, 90% of deaths occur in individuals older than 65 years and the estimated mean age at death is 76 years. The mean age at death caused by the pandemic strain is also lower than that observed in the influenza epidemics of 1957 and 1968 [16].

In the present study, two polymorphisms of the ST3GAL1 gene, which encodes the Siaα2-3Galβ1-receptor, were investigated in patients with a diagnosis of influenza caused by the pandemic strain. Multivariate analysis demonstrated an association between the GC and AT haplotypes and severity of, or death due to, the infection. Expression levels of sialyltransferase genes are known to differ according to tissue and cell type, permitting regulation of the cellular pattern of sialylation and anticipating a complex specificity of these enzymes [17]. The enzyme encoded by the ST3GAL4 gene, beta-galactoside alpha-2,3-sialyltransferase 4, transfers the sialic acid chain to a galactose residue (Galβ1-4GlcNAc), forming the structure that serves as a cell entry receptor of influenza H5N1. The expression patterns of this enzyme differ between tissues of the respiratory tract and also show interpersonal variability, influencing differences in the rate of infection with this virus in a population [18]. Similarly, higher expression of the Siaα2-6Gal(NAC) receptor was observed in the lung tissue of a young patient without comorbidities who died, when compared with three other patients who died and had important risk factors for severe infection with Influenza A(H1N1)pdm09 [19]. Variability in the expression of the Siaα2-3Galβ1-receptor in tissues of the respiratory tract may also be related to variations in the manifestation of influenza caused by the 2009 pandemic strain. The rs113350588 and rs1048479 polymorphisms of the ST3GAL1 gene were predicted to play a role in the regulation and processing of transcription of this gene, influencing the availability of functional protein in the cell. The GC and AT haplotypes of the ST3GAL1 gene were more frequent in patients who died and determine a higher risk of this outcome.
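As a note on how such results are read, the sketch below shows the standard relationship between a logistic regression coefficient and the reported odds ratio with its 95% confidence interval. The numbers reproduce the OR and CI quoted above; the recovered coefficient and standard error are inferred from those published values, not quantities reported by the authors.

import math

# An odds ratio from logistic regression is exp(beta); its 95% CI is
# exp(beta +/- 1.96 * SE). Working backwards from the published values
# (OR = 4.159, 95% CI 1.55-11.12) recovers the implied beta and SE.
or_hat, ci_low, ci_high = 4.159, 1.55, 11.12

beta = math.log(or_hat)                                     # ~1.425
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)    # ~0.503

print(f"beta = {beta:.3f}, SE = {se:.3f}")
print(f"reconstructed CI: ({math.exp(beta - 1.96 * se):.2f}, "
      f"{math.exp(beta + 1.96 * se):.2f})")   # matches 1.55-11.12 to rounding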
The presence of these haplotype variants may influence the expression or structure of Siaα2-3Galβ1-receptors in cells of the lower respiratory tract, facilitating entry of the virus into tissues and increasing viremia, which, in turn, can lead to a more severe presentation of the disease and can culminate in death. Functional studies would clarify the influence of these variants on enzyme expression and receptor formation. The present study demonstrated for the first time an association between ST3GAL1 gene haplotypes and the risk of more severe disease and death in patients infected with Influenza A(H1N1)pdm09. Studies of this gene in different world populations should help clarify the importance of these variants for understanding the role of host genetic variability in the clinical presentation and development of pandemic influenza.

The study was divided into two phases: first, the presence of genetic variation in the ST3GAL1 gene was evaluated in a small sample; second, the polymorphisms found were genotyped in a larger sample. Collection of the material during the two phases was accompanied by the completion of a notification form of the Brazilian National System of Medical Care (SINAM) containing the clinical data of the patient. The study was approved by the Research Ethics Committee of the Center of Tropical Medicine, Federal University of Pará, and all patients who agreed to the blood collection signed a free informed consent form. For underage participants (younger than 18 years; n = 133), the informed consent forms were signed by their parents. All informed consent forms were filed at the Federal University of Pará.

In a preliminary study, 201 blood and nasal aspirate and/or nasopharyngeal swab samples were collected from subjects of both genders and all age groups who had a clinical suspicion of flu syndrome caused by strain A(H1N1)pdm09 and who sought healthcare services in the metropolitan region of Belém, Pará, Brazil. Diagnostic confirmation of the strain was obtained at the Laboratory of Respiratory Viruses, Virology Section of the Evandro Chagas Institute (SEVIR/IEC), Ananindeua, Pará, using the SuperScript III One-Step qRT-PCR System with Platinum Taq (Invitrogen Life Technologies), according to the protocol recommended by the Centers for Disease Control and Prevention [20]. Genomic DNA was extracted from the samples of 68 patients infected with Influenza A(H1N1)pdm09 virus using the QIAamp DNA Mini Kit (Qiagen) according to the manufacturer's instructions. The primers for amplification of the six coding regions of the ST3GAL1 gene were designed with the Primer3 software [21] based on the reference sequence ENSG00000008513 (Table D in S1 File) [22]. After testing with the AutoDimer software, the primers were used in a polymerase chain reaction (PCR) to amplify each exon (numbered 4 to 9 according to the reference transcript ENST00000521180) in the 68 patients (Table D in S1 File) [22]. The amplicons were then sequenced using the BigDye Terminator Kit (Applied Biosystems) according to the manufacturer's specifications. The PCR conditions are described in Tables E and F in S1 File. Once obtained, the sequences were aligned at a similarity of at least 70%, with 10 rounds of refinement, using the Geneious 5.5.6 software to identify point mutations. The rs113350588 SNP in exon 4 and the rs1048479 SNP in exon 8 were detected in the sample studied, with minor allele frequencies of 50% and 40.4%, respectively (Table G in S1 File).
To evaluate the putative effect of both variants on ST3GAL1 regulation, in silico analyses using HaploReg 3.0 [23] and Human Splicing Finder 3.0 [24] were performed to assess putative regulatory function and effects on splicing activity, respectively. HaploReg is a tool for exploring annotations of the noncoding genome at variants on haplotype blocks, drawing on comprehensive data from the Encyclopedia of DNA Elements (ENCODE). Using LD information from the 1000 Genomes Project, genetic variants can be visualized along with their predicted chromatin state, their sequence conservation across mammals, and their effect on regulatory motifs [23]. Human Splicing Finder 3.0 [24] helps study pre-mRNA splicing. It combines 12 different algorithms to identify and predict the effects of mutations on splicing motifs, including the acceptor and donor splice sites, the branch point, and auxiliary sequences known to either enhance or repress splicing. These algorithms are based on position weight matrices (PWMs), the maximum entropy principle, or the motif comparison method.

In the second phase of the study, 356 patients were randomly selected from among 1,524 cases of Influenza A(H1N1)pdm09 from the northern and northeastern regions of Brazil confirmed at the Evandro Chagas Institute. Diagnostic confirmation and DNA extraction were done as described for the first phase of the study. Allelic discrimination of the polymorphisms was performed in all samples by real-time PCR using the C_2771724_10 assay (rs1048479) and a custom assay (rs113350588) of the TaqMan system (Applied Biosystems) according to the manufacturer's instructions. The proportions of African, European and Native American genetic ancestry in the 356 patients included in the second phase of the study were estimated using a panel of 48 ancestry informative markers, as described elsewhere [25].

Allele frequencies were estimated by direct counting. Hardy-Weinberg equilibrium was tested by chi-squared analysis. Haplotype frequencies and linkage disequilibrium were estimated with the PHASE 2.1.1 software [26]. Differences in quantitative and qualitative characteristics between the groups of hospitalized patients, non-hospitalized patients and patients who died of the disease were assessed by ANOVA, Fisher's exact test and the Kruskal-Wallis test. Fisher's exact test was also applied to analyze differences in the allele frequencies of the haplotypes between the groups of patients. Logistic regression models were used to determine the association between ST3GAL1 gene haplotypes and the severity of infection, adjusting for the following variables: age, European and African genetic ancestry, and the presence of comorbidities. All analyses were performed with the SPSS 18.0 software, and a level of significance of p < 0.05 was adopted.

Supporting Information. S1 File. Table A in S1 File: In silico functional analysis results for rs113350588 and rs1048479 and variants in linkage disequilibrium (* refers to LD between the rs2142306, rs2736865 and rs1048479 polymorphisms in the 1000 Genomes project). Table B in S1 File: Frequency of the ST3GAL1 gene haplotypes observed in patients infected with Influenza A(H1N1)pdm09. Table C in S1 File: ST3GAL1 gene diplotype frequencies observed in patients infected with Influenza A(H1N1)pdm09. Table D in S1 File: Sequences of the primers used for PCR amplification and nucleotide sequencing of the ST3GAL1 gene. Table E in S1 File: Protocol for PCR amplification.
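For illustration, the following sketch implements the textbook chi-squared test for Hardy-Weinberg equilibrium at one biallelic SNP, the test named in the Methods. The genotype counts are invented and are not the study's data.

from scipy.stats import chi2

# Chi-squared test for Hardy-Weinberg equilibrium at a biallelic SNP.
# Genotype counts (AA, Aa, aa) below are hypothetical.
n_AA, n_Aa, n_aa = 90, 170, 96
n = n_AA + n_Aa + n_aa
p = (2 * n_AA + n_Aa) / (2 * n)      # allele frequency by direct counting
q = 1 - p

expected = [n * p * p, 2 * n * p * q, n * q * q]
observed = [n_AA, n_Aa, n_aa]
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = chi2.sf(chi_sq, df=1)      # 3 classes - 1 - 1 estimated parameter
print(f"chi2 = {chi_sq:.3f}, p = {p_value:.3f}")

A p value above 0.05 here indicates no significant deviation from equilibrium, which is the result reported for both ST3GAL1 polymorphisms.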
a Mixture of deoxyribonucleotide triphosphates: dATP, dCTP, dGTP, and dTTP. b A total of 35 cycles were performed for all reactions except exon 8. c Annealing: 65 °C, 2 cycles; 64 °C, 10 cycles; 62 °C, 10 cycles; and 60 °C, 15 cycles.
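The Hardy-Weinberg equilibrium test mentioned in the statistical analysis above is a chi-squared goodness-of-fit test against the genotype proportions expected from the estimated allele frequencies. Below is a minimal Python sketch of that test for a biallelic SNP such as rs1048479; the genotype counts are illustrative placeholders, not the study's data.

```python
# Hardy-Weinberg chi-squared test for a biallelic locus (sketch).
# Genotype counts below are hypothetical, not from the study.
from scipy.stats import chi2

def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-squared goodness-of-fit test against Hardy-Weinberg proportions."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # allele A frequency by direct counting
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)          # df = 3 classes - 1 - 1 estimated parameter

stat, p_value = hwe_chi_square(90, 180, 86)   # hypothetical genotype counts
print(f"chi2 = {stat:.3f}, p = {p_value:.3f}")
```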
There is increasing recognition that not all severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) reverse transcription-PCR (RT-PCR) assays are created equal with respect to test performance (1). A recently published study found that 7.1% of COVID-19 patients with initially negative RT-PCR results had SARS-CoV-2 detected upon repeat testing (2). Infectious Diseases Society of America (IDSA) guidelines advocate repeating testing in patients with intermediate or high clinical suspicion of COVID-19 infection when the initial test is negative (https://www.idsociety.org/practice-guideline/covid-19-guideline-diagnostics/). Adoption of this strategy in the United States is challenging in the setting of widespread shortages of nasopharyngeal (NP) swabs and testing reagents. Our own institutions have experienced such shortages, and repeat testing generally requires approval. We performed a retrospective chart review to assess the impact of a restrictive approach to retesting. Between March and mid-April 2020, 1,128 patients underwent testing for SARS-CoV-2 at our institutions, of whom 232 were positive with a single NP swab specimen (20.6%). Of the 896 patients with negative results, only 33 underwent repeat SARS-CoV-2 testing (3.7%), of whom only one (3.0% [1/33]) was positive. This patient was retested on a subsequent hospital admission for new onset of respiratory symptoms. Among the 22 patients who underwent repeat testing during the same hospital admission (22/33), the median time to retesting was 3.1 days. We acknowledge the possibility that others in our cohort might have been identified as positive had routine retesting been performed. IDSA guidelines suggest that testing of a second specimen is associated with a 17% increase in test sensitivity. Based on this, we estimate that routine repeat testing could have detected an additional 48 cases in our patient cohort. IDSA guidelines recommend testing of lower respiratory tract specimens if the initial NP specimen is negative in patients with an intermediate or high suspicion of infection. However, the majority of commercial assays for SARS-CoV-2 detection are not cleared for such specimens, and testing is available only in some reference laboratories. Only three patients in our cohort had alternate specimen types tested for SARS-CoV-2 (one oropharyngeal swab, one tracheal aspirate, and one bronchoalveolar lavage sample), and all three were negative. Supply chain issues have limited our capacity to test all patients whom we believe might actually have the infection despite an initially negative test result. IDSA recommends repeat testing in certain clinical situations, and as the breadth of COVID-19 clinical presentations expands (3), every effort should be made to enable widely available testing in health care facilities. This will help ensure that truly infected patients are not left undiagnosed and untreated, and will help avoid unrecognized exposure of health care workers and other patients. In the meantime, our experience suggests that non-systematic repeat testing driven by resource restrictions is unlikely to be fruitful.
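The letter does not spell out the arithmetic behind the 48-case estimate. One plausible reading (our assumption, not the authors' stated derivation) is that a single NP swab detects roughly 83% of true cases, so a 17-percentage-point gain in sensitivity from a second specimen yields about 48 extra detections among the 232 single-swab positives:

```python
# One plausible reconstruction of the 48-case estimate (an assumption,
# not the authors' stated derivation).
single_swab_positives = 232
sensitivity_gain = 0.17            # IDSA: ~17% higher sensitivity with a second specimen
assumed_single_sensitivity = 0.83  # our assumption for a single NP swab

true_cases = single_swab_positives / assumed_single_sensitivity
additional_detections = true_cases * sensitivity_gain
print(round(additional_detections))   # -> 48
```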
COVID-19 rages worldwide and had caused more than 592,495 deaths as of July 17, 2020. Given the absence of treatments of proven efficacy and our limited understanding of 2019-nCoV, gearing up for a protracted war against the pandemic requires detailed research. Angiotensin I converting enzyme 2 (ACE2), a zinc metalloprotease that shares homology with ACE, is the entry molecule of 2019-nCoV. 2019-nCoV binds to ACE2 through its surface spike protein and hence infects the target cell, causing severe injury [1]. Gaining insight into the regulation of ACE2 is therefore pivotal in the bid to develop potential alternatives for COVID-19 [2,3]. ACE2 is widely known as a negative regulator of the renin-angiotensin system (RAS). In addition, it is highly expressed throughout the gut (https://www.proteinatlas.org/ENSG00000130234-ACE2/tissue). Kotfis et al. reported gastrointestinal manifestations in COVID-19 patients [4]. Moreover, COVID-19 patients have significant alterations in fecal microbiomes compared with healthy volunteers, and during hospitalization microbiome signatures associated with reduced ACE2 correlate inversely with 2019-nCoV load [5], suggesting the involvement of ACE2 in 2019-nCoV replication. Sander et al. further confirmed that ACE2 upregulation exacerbates the outcomes of COVID-19 by facilitating 2019-nCoV entry into the host cell [6]. Logically, the use of ACE inhibitors (ACEIs) would constitute a novel strategy to treat COVID-19. However, some unpredicted side effects induced by ACEIs may occur owing to the pivotal role of the ACE2/RAS axis in maintaining a variety of cellular processes, including autonomous regulation in the gut [7,8]. Therefore, the safe administration of ACEIs has been intensely discussed. Zhang et al. reported a lower risk of mortality in COVID-19 patients receiving ACEIs than in those receiving non-ACEI/ARB antihypertensive drugs [9]. Emerging studies also support the continuation of ACEIs or ARBs in COVID-19 patients owing to a lower risk of mortality [10-13]. Nevertheless, impaired ACE2 expression results in intestinal dysbiosis [14] and exacerbates inflammatory phenotypes [15-17]. Exploring the role of ACE2 in gut homeostasis and elucidating the pertinent mechanism are therefore urgent for developing adequate treatment options. In the present study, we examined the expression of ACE2 in the gut, monitored the influence of ACE2 on the regenerative capacity of intestinal stem cells (ISCs), and explored the consequences of ace2 deficiency in IBD mouse models. All procedures and assays were approved by the Institutional Animal Care and Use Committee of Jining Medical University. Ace2 knockout mice on the C57BL/6 background and wild-type littermates were purchased (Cyagen, China). All animals were housed in a specific pathogen-free facility at Jining Medical University. All procedures involving animals were conducted within IACUC guidelines under approved protocols. 4% dextran sodium sulfate (DSS, MP Biomedicals) was added to drinking water for 5 days.
IBD was scored by the following standards: (i) weight loss (no loss = 0; <5% = 1; 5-10% = 2; 10-20% = 3; >20% = 4); (ii) stool (normal = 0; soft, watery = 1; very soft, semi-formed = 2; liquid, sticky, or unable to defecate = 3); (iii) bloody stool test (Leagene) (no positive blood test within 2 min = 0; purple positive after 10 s = 1; light purple positive within 10 s = 2; heavy purple positive within 10 s = 3); (iv) histological injury and inflammation, scored by parameters including edema, destruction of the epithelial monolayer, crypt loss, and infiltration of immune cells into the mucosa. Tissues were fixed and then embedded in paraffin, and 3-µm sections were stained with hematoxylin and eosin (H&E). For immunofluorescence, gut tissue slides or intestinal organoids were blocked with 3% BSA, 0.2% Tween 20 in PBS and incubated with primary antibodies (1:100 dilution) (MUC2 monoclonal antibody, KI-67 antibody, LGR5 monoclonal antibody, Alexa Fluor 488 phalloidin, MitoSOX Red mitochondrial superoxide indicator). After PBS washing, goat anti-mouse IgG (H+L) Alexa Fluor 488 and DAPI were applied for 60 min and 15 min, respectively. Mice were fasted for 4 h and orally gavaged with fluorescein isothiocyanate (FITC)-dextran (average molecular weight 4,000; 0.6 mg/g) (MedChemExpress). Fluorescence intensity of plasma was measured after 4 h (emission 520 nm). Meanwhile, the fecal albumin level was measured by ELISA (Bethyl Laboratories) according to the manufacturer's protocol. Intestinal organoids were isolated and cultured according to the protocol (Stemcell Technologies). The small intestine was washed with cold PBS 15 times until the supernatant was clear, then resuspended in cell dissociation reagent (Stemcell Technologies) for 15 min at room temperature. The liquid was removed and the digested intestine was washed with PBS. The supernatant was filtered through a 70-µm cell strainer, the fractions were centrifuged at 1,300 rpm for 5 min, and the pellet was resuspended in complete IntestiCult organoid growth medium (Stemcell Technologies). The medium was exchanged every two days. Intestinal organoids were loaded with Fura-2 (2 µM, MedChemExpress) for 15 min and mounted on an inverted phase-contrast microscope (Zeiss). Calcium imaging was performed with a calcium imaging set-up (Molecular Devices) and data were analyzed with MetaFluor. Tissue or organoids were lysed with RIPA cell lysis buffer (Roche). The extracts were centrifuged at 13,000 rpm for 20 min at 4 °C and the protein concentration of the supernatant was determined. Total protein (60 µg) was subjected to 8% SDS-PAGE. Proteins were transferred to a nitrocellulose membrane (VWR) and the membranes were blocked overnight at 4 °C with 10% non-fat dried milk in Tris-buffered saline (TBS) containing 0.1% Tween-20. The membranes were incubated overnight at 4 °C with antibodies directed against MUC2, KI-67 and LGR5 (1:1000) (ThermoFisher). A GAPDH antibody (1:1000) (ThermoFisher) was used as a loading control. Specific protein bands were visualized after subsequent incubation with anti-rabbit IgG conjugated to horseradish peroxidase and a SuperSignal chemiluminescence detection procedure. Total RNA was extracted according to the manufacturer's instructions (TAKARA). After DNase digestion, reverse transcription of total RNA was performed using the Transcriptor High Fidelity cDNA Synthesis Kit (TAKARA) according to the manufacturer's instructions.
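The composite disease-activity score at the start of this methods passage is straightforward to mechanize. The following is a minimal Python sketch of that rubric under stated assumptions: the function and argument names are ours, the thresholds follow the text, and the histology sub-score (iv) is omitted because its point values are not enumerated.

```python
# Composite DSS-colitis activity score (sketch of sub-scores i-iii above);
# function and argument names are ours, thresholds follow the text.
def ibd_activity_score(weight_loss_pct, stool_score, blood_score):
    """Sum of weight-loss, stool-consistency and bloody-stool sub-scores.

    stool_score: 0 normal; 1 soft/watery; 2 very soft/semi-formed;
                 3 liquid/sticky/unable to defecate.
    blood_score: 0-3 per the occult-blood test timing in the text.
    """
    if weight_loss_pct <= 0:
        weight_score = 0
    elif weight_loss_pct < 5:
        weight_score = 1
    elif weight_loss_pct <= 10:
        weight_score = 2
    elif weight_loss_pct <= 20:
        weight_score = 3
    else:
        weight_score = 4
    return weight_score + stool_score + blood_score

print(ibd_activity_score(weight_loss_pct=12, stool_score=2, blood_score=1))  # -> 6
```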
Polymerase chain reaction (PCR) amplifications of the respective genes were set up in a total volume of 20 µl using 40 ng of cDNA, 500 nM forward and reverse primers and 2x GoTaq qPCR Master Mix SYBR Green (TAKARA). Cycling conditions were as follows: initial denaturation at 95 °C for 5 min, followed by 40 cycles of 95 °C for 15 s, 55 °C for 15 s and 72 °C for 30 s. The following primers were used for amplification (5'->3' orientation): fw CACTCCTGCCACACCACGTT; rev TGGTCTTTAGGTCAAGTTTACAGCC; Tbp (TATA box-binding protein): fw CACTCCTGCCACACCACGTT; rev TGGTCTTTAGGTCAAGTTTACAGCC. Specificity of PCR products was confirmed by analysis of a melting curve. Real-time PCR amplifications were performed on a CFX96 Real-Time System (Bio-Rad). All experiments were done in duplicate. Amplification of the housekeeping gene Tbp was performed to standardize the amount of sample RNA. Relative quantification of gene expression was achieved using the ΔCt method. Data are provided as means ± SEM; n represents the number of independent experiments. All data were tested for significance using Student's unpaired two-tailed t-test or ANOVA, and only results with p < 0.05 were considered statistically significant. Ace2 knockout mice were included and colon tissue was isolated. As illustrated in Fig. 1A-B, western blotting and RT-PCR confirmed the expression of ACE2 in the colon and validated the ace2 knockout mouse strain. Immunofluorescent assays further showed the ubiquitous abundance of ACE2 in the colon (Fig. 1C). Using organotypic cultures and ace2 genetic mice, we ascertained the high expression of ACE2 in intestinal organoids (Fig. 1D). To determine the function of ACE2 in intestinal barrier function, we isolated intestinal organoids from ace2+/+ and ace2-/- mice. Ace2 deficiency did not cause obvious structural disruption, as reflected by phalloidin staining (Fig. 2A). Of note, LGR5 (a marker of ISCs) and KI67 (a marker of proliferation) were markedly lower in ace2-/- organoids than in ace2+/+ organoids (Fig. 2A). Organotypic analysis further showed that ace2 deficiency led to slower regeneration of intestinal organoids, as evidenced by smaller size and slowed differentiation (Fig. 2B). In full accordance with the findings above, western blotting showed that the expression of LGR5 and KI67 in ace2-/- organoids was significantly lower than in ace2+/+ organoids (Fig. 2C). Ace2 deficiency led to increased FITC fluorescence in organoids, suggesting that ACE2 contributes to intestinal barrier function (Fig. 2E). Much less expected was the unchanged plasma level of FITC in ace2 knockout mice after oral gavage of FD-4, which may be explained by the various mechanistic pathways involved in the maintenance of mucosal complexity in vivo. The lack of obvious barrier dysfunction suggests that ACE2 likely becomes more relevant under pathophysiological settings, such as IBD. As illustrated in Fig. 3A-C, ace2 deficiency exaggerated the progression of IBD, as evidenced by more pronounced weight loss, earlier bloody feces and higher fecal albumin levels. Morphological assays showed that ace2-/- mice were more susceptible than ace2+/+ mice to destruction of the intestinal architecture and crypt loss, as well as infiltration of inflammatory cells, obvious edema and reduction in colon length (Fig. 3D). Additionally, marked loss of LGR5 and MUC2 was observed in ace2-/- IBD mice (Fig. 3E). A proper intracellular calcium concentration is a necessity to maintain gut homeostasis.
Calcium overload and ROS production are equally involved in cellular dysfunctions, such as impaired regeneration of stem cells. To explore the nature and mechanism whereby ace2 deficiency undermines the virtue of ISCs and hence epithelial renewal, the intracellular calcium concentration was examined by calcium imaging. As demonstrated in Fig. 4A and B, the ATP-induced [Ca2+]i was significantly higher in ace2-/- organoids than in ace2+/+ organoids, indicating a calcium overload caused by ace2 deficiency. Moreover, elevated ROS production was observed in ace2-/- organoids.
Fig. 1. ACE2 is highly expressed in the gut. A. Original western blot pictures and bar charts (n = 4) illustrating the protein abundance of ACE2 in colon tissue and organoids isolated from ace2+/+ and ace2-/- mice. B. RT-PCR results illustrating the mRNA level of ace2 in the colon and organoids isolated from ace2+/+ and ace2-/- mice. C. Immunofluorescent staining of ACE2 in the colon from ace2+/+ and ace2-/- mice. D. Immunofluorescent staining of ACE2 in organoids isolated from ace2+/+ and ace2-/- mice. * (p < 0.05), ** (p < 0.01), *** (p < 0.001) indicate significant differences (two-tailed unpaired t-test).
The coronavirus pandemic rages across the world and poses various challenges for gastroenterology. Given that IBD patients may be particularly susceptible to 2019-nCoV, efforts are underway to investigate the role of ACE2 in the gut. Accumulating evidence indicates that modulation of ACE2 expression influences the severity of colitis [18,19]. However, there has been debate about the role of ACE2 in the gut. An elevated ACE2 level is observed in colon isolated from the IBD mouse model [20], and ACEIs could markedly rescue the progression of DSS-induced experimental colitis [21]. Nevertheless, Hashimoto et al. reported that ace2 deficiency causes a high risk of colitis owing to hampered immune cell trafficking as well as alterations of the gut microbiota [19]. In full accordance with the findings above, our data showed that lack of ACE2 reduced the expression of LGR5 and MUC2 in the colon, indicating the necessity of ACE2 for maintaining the stemness of ISCs. Barrier function requires the complexity of the epithelium, which relies on the differentiation of ISCs. ISCs renew the epithelial lining and hence counteract epithelial damage, restoring barrier function in response to various insults such as infections [22]. Intestinal organoids, deemed ideal for investigating gastrointestinal pathology, incorporate a diverse array of the physiological features of in vivo intestinal tissue, including a polarized epithelial layer surrounding a functional lumen and all the cell types of the intestinal epithelium. We found that ace2 deficiency decreased LGR5 and KI67 expression and slowed the development of intestinal organoids, indicating the necessity of ACE2 for supporting epithelial turnover and ensuring healing after insults. Additionally, it is well documented that the intracellular calcium concentration plays a vital role in the differentiation of ISCs [23]. Calcium overload results in mitochondrial dysfunction and elevated mitochondrial reactive oxygen species (ROS) production, which dictates cell fate [24-27]. We herein observed a markedly higher intracellular calcium concentration and ROS level in ace2-/- intestinal organoids. We postulate that ACE2 may dictate the stemness of ISCs by orchestrating calcium perturbation. Lamers et al.
reported high expression of ACE2 in enterocytes and that enterocytes are readily infected by 2019-nCoV [28]. However, lines of evidence suggest an increased risk of 2019-nCoV infection in pregnant and older patients with IBD [29,30], pointing to a putative hazard of ACEI or ARB administration. In conclusion, neither ACE2 downregulation nor ACEIs are the ideal treatment for COVID-19 patients with gut diseases, and combination therapy of an ACEI and a calcium blocker merits further investigation.
Fig. 4. Ace2 deficiency induces calcium overload in intestinal organoids. A. Representative tracings showing the Fura-2 fluorescence ratio in organoids isolated from ace2+/+ and ace2-/- mice. B. Arithmetic means (± SEM, n = 5) of the slope and the peak of the change in Fura-2 fluorescence following re-addition of Ca2+, reflecting Ca2+ entry in ace2+/+ and ace2-/- organoids. C. Immunofluorescence image of ROS in ace2+/+ and ace2-/- organoids.
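As an aside on the qRT-PCR methods above, relative quantification by the ΔCt method reduces to a one-line formula: expression relative to the housekeeping gene is 2 raised to -(Ct_target - Ct_housekeeping). A minimal Python sketch follows, with illustrative Ct values rather than measured ones.

```python
# Delta-Ct relative quantification (sketch); Ct values are illustrative.
def relative_expression(ct_target, ct_housekeeping):
    """Expression of the target normalized to the housekeeping gene (e.g. Tbp)."""
    return 2 ** -(ct_target - ct_housekeeping)

wild_type = relative_expression(ct_target=24.1, ct_housekeeping=20.3)
knockout = relative_expression(ct_target=33.8, ct_housekeeping=20.2)
print(f"wild type: {wild_type:.4f}, knockout: {knockout:.6f}")
```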
ACE2 expression may be upregulated by mediators. For example, culture of primary human airway cells with interferon alpha 2 in vitro increases ACE2 transcripts (6). Since the effect of traffic-related PM on the expression of ACE2 in human airway cell populations is not known, we sought, in this study, to assess ACE2 expression in human airway epithelial cells exposed to traffic-derived PM10 in vitro. Traffic-derived PM10 was collected as dry particles using a high-volume cyclone placed within 2 metres of Marylebone Road, London, UK (8). Marylebone Road is one of the most polluted roads in Europe, with diesel trucks dominating near-road traffic-derived PM10 emissions (9). In order to obtain milligram amounts of PM10, sampling was done for 6 to 8 h per day on 10 occasions between May and September 2019 (i.e. before the UK lockdown). PM10 samples were pooled and stored at room temperature in a sterile glass container. An aliquot of PM10 was diluted in Dulbecco's phosphate-buffered saline (DPBS) to a final concentration of 1 mg/mL and stored as a master stock at -20 °C. Cigarette smoke extract (CSE) was collected onto a cotton filter through a peristaltic pump (Jencons Scientific Ltd., East Grinstead, UK) at a fixed rate from two Marlboro Red cigarettes, as previously described (10). Cigarette smoke extract was recovered after vortexing in 2 mL DPBS and stored at -80 °C as a 100% master stock. The human alveolar type II epithelial cell line A549 was purchased from Sigma. Culture of A549 cells with fossil-fuel-derived PM10 (0 to 20 µg/mL) for 2 h resulted in a concentration-dependent increase in ACE2 expression, with a significant increase at both 10 µg/mL and 20 µg/mL (n=5, P<0.05 and P<0.01 vs. medium control, Figure 1). At 20 µg/mL, ACE2 increased by 16,150-fold (IQR 2,577 to 64,758). Using a single PM10 concentration of 10 µg/mL, ACE2 expression also increased in primary nasal epithelial cells (Figure 2B). Culture of A549 cells with 5% CSE, a putative positive control, increased ACE2 expression (MFI, n=4, 0 (0 to 28) vs. 9,088 (7,557 to 15,831), P<0.05, Figure 3). In this study we found that PM10, collected next to a major London road dominated by diesel traffic (8), upregulates ACE2 expression in a human type II pneumocyte cell line (A549 cells). We also found that traffic-derived PM10 upregulates ACE2 expression in human primary nasal epithelial cells, suggesting that this response occurs throughout the respiratory tract. One strength of the present study is that collection of traffic-derived PM10 by a high-volume cyclone obviated the need to extract PM from filters in solution, and we could therefore accurately determine the PM10 concentrations used in cell culture studies. Although the effect of PM10 on ACE2 expression in human airway cells has not previously been reported, our findings are compatible with an animal study that reported that lung ACE2 protein expression in wild-type mice increased 1.3-fold at 2 days post intratracheal instillation of urban PM2.5 (11). A putative protective effect of increased pulmonary ACE2 was suggested in this mouse model by complete recovery from PM-induced acute lung injury in wild-type mice, and incomplete recovery in ACE2 knockout mice (11). We therefore speculate that increased ACE2 expression may, on one hand, be a beneficial response to PM exposure, but on the other hand presents a Trojan horse to the SARS-CoV-2 virus. We included CSE as a putative positive control, since Leung et al (12) reported increased airway ACE2 expression in smokers. There are limitations to this study.
First, we did not determine whether increased ACE2 expression … In conclusion, this study provides the first mechanistic evidence that traffic-derived air pollution increases ACE2 expression in human airway cells, and therefore vulnerability to SARS-CoV-2 infection. We conclude that there is biological plausibility for epidemiological studies reporting an association between either PM10 or active smoking and COVID-19 disease.
On February 20th 2020, coronavirus disease 2019 (COVID-19) severely hit the northern part of Italy. It was reported that, in Lombardy, the most populated region of the country, more than 1,500 patients required intensive care unit (ICU) admission over only 4 weeks, largely exceeding the actual capacity (1). In the same period, the number of hospital admissions was 7,285 (2). Approximately 35% of these patients experienced acute respiratory failure (ARF) requiring some form of respiratory support. A mathematical model of the occupation of intensive care resources in Italy predicted saturation of the theoretically available beds in the national territory by mid-April 2020 (3). Under these circumstances, despite extraordinary efforts aimed at increasing the availability of ICU resources, the Italian Societies of Respiratory Medicine proposed a protocol to provide ventilatory support outside the ICU in dedicated Respiratory COVID Units, reinforced by a higher number of nurses and noninvasive monitoring (4). This recommendation was somewhat at odds with most of the available guidelines, which contraindicated noninvasive respiratory support (NRS) in these patients because of major concerns that bio-aerosol-producing techniques might contaminate hospital staff (5). This "emergency" situation gave us the unique opportunity to challenge the hypothesis that NRS should not be used outside the ICU during pandemics. We therefore analyzed the feasibility and safety, in terms of staff contamination, of NRS applied to severely ill patients outside the ICU. Patients' characteristics and clinical outcomes were also analyzed. The study was conducted in four out of five hospitals in the Area Vasta Emilia network and in five hospitals in the neighbouring regions, serving a population of approximately 8 million people. Institutional Review Boards reviewed the protocol and authorized prospective data collection. Informed consent was waived. A confirmed case of COVID-19 was defined as a patient with a positive result on high-throughput sequencing or real-time reverse transcriptase-polymerase chain reaction assay of nasal and pharyngeal swab specimens. Data were collected from the registries of the Respiratory Disease Unit coordinators at the nine hospitals, identifying all of the patients receiving NRS outside the ICU. Excluding standard oxygen administration, patients were treated with three different types of NRS, namely high-flow nasal cannula (HFNC), continuous positive airway pressure (CPAP), or noninvasive ventilation (NIV), which also defined the three groups in the analysis. The triage of patients was performed according to the Italian Respiratory Societies Joint Guidelines, based on severity. In particular, the following categories were proposed: a) green (SaO2 >94%, respiratory rate (RR) <20 breaths/min); b) yellow (SaO2 <94%, RR >20, but responding to 10-15 L/min oxygen); c) orange (SaO2 <94%, RR >20, poor response to 10-15 L/min oxygen, and requiring CPAP/NIV with very high FiO2); d) red (SaO2 <94%, RR >20, poor response to 10-15 L/min oxygen or to CPAP/NIV with very high FiO2, or presenting respiratory distress with PaO2/FiO2 <200 and requiring ETI and intensive care). Patients belonging to the latter two categories were therefore considered eligible for NRS in dedicated respiratory COVID areas (see below) set up for the isolation of confirmed cases and ARF treatment.
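The four-colour triage rule just described can be expressed as a simple decision function. Below is a minimal Python sketch under stated assumptions: the function and parameter names are ours, and the thresholds follow the guideline categories above.

```python
# Sketch of the Italian Respiratory Societies triage categories described above;
# function and parameter names are ours, thresholds follow the text.
def triage_category(sao2, rr, responds_to_o2=False, pf_ratio=None):
    """Return the triage colour for a patient with suspected ARF.

    sao2: oxygen saturation (%); rr: respiratory rate (breaths/min);
    responds_to_o2: response to 10-15 L/min oxygen;
    pf_ratio: PaO2/FiO2 ratio, if measured.
    """
    if sao2 > 94 and rr < 20:
        return "green"
    if responds_to_o2:
        return "yellow"
    if pf_ratio is not None and pf_ratio < 200:
        return "red"      # respiratory distress: requires ETI and intensive care
    return "orange"       # requires CPAP/NIV with very high FiO2

print(triage_category(sao2=91, rr=26, responds_to_o2=False, pf_ratio=150))  # red
```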
These patients were not "usually" treated outside the ICU but, given the "emergency" situation and the lack of ICU beds, and only once multiorgan dysfunction had been excluded, they were still considered eligible for an NRS trial. The transfer to the ICU for intubation of severely ill patients with compromised haemodynamic parameters, a low PaO2/FiO2, or 'not responding to NRS' was discussed with the intensivists, based on prognosis, and obviously was only possible if beds were available. Although not specifically mentioned in the guidelines, HFNC was also used in these two categories, during breaks in ventilation or as a stand-alone support. The use of helmet CPAP devices was suggested as first-line treatment, mainly for safety reasons. Clearly, this technique requires a sufficient supply of helmet interfaces (which ran out quite rapidly) and a high flow of O2 (which exceeded the O2 capacity in some hospitals), so NIV and HFNC were used as alternatives, the first when it was necessary to "save" oxygen, and the second when CPAP availability ran out. The respiratory COVID areas consisted mainly of two different units, both present in all of the hospitals. The first, formerly a respiratory ward, was an ad-hoc dedicated Respiratory Monitoring Unit consisting of specialized monitored areas with an active full-day shift run by a fixed group of pulmonologists and with a "reinforced" nurse-patient ratio varying from 1:4 to 1:6, depending on the hospital. The second unit, called the Respiratory Intermediate Care Unit, consisted of a fixed medical team. These had a monitoring system similar to that of the Respiratory Monitoring Units, together with the availability of ICU ventilators and a nurse-patient ratio from 1:2 to 1:4, and this is where the more severely affected patients were usually treated. Patients were continuously monitored with electrocardiogram trace, noninvasive blood pressure, arterial oxygen saturation, and respiratory rate (RR). Intensive care doctors were available around the clock at the request of the ward teams. Great care was taken to keep a distance of >1.5 metres between beds and to provide natural ventilation and an airflow of at least 160 L·s-1 per patient. Concerning staff protection, courses were quickly organized to train staff in the correct use of personal protective equipment (PPE), including donning and doffing. Filtering facepiece class 3 (FFP3) or FFP2 masks, double non-sterile gloves, long-sleeved water-resistant gowns, and goggles or face shields were mandatory in the presence of aerosol-producing procedures. NIV was delivered mainly by dedicated single-circuit NIV platforms provided with an oxygen blender and ad-hoc filters placed in the single-tube circuit before the non-rebreathing devices to minimize bio-aerosol dispersion, or by ICU ventilators. HFNC was delivered using standard devices (Nasal High Flow Therapy, Fisher and Paykel Healthcare Ltd, New Zealand), while dedicated helmet CPAP devices, designed for pandemics, were simply activated by connecting them to the O2 source available in the hospital, with blender systems applied to obtain adequate values of delivered FiO2 (Intersurgical SpA, Mirandola, Italy and Dimar srl, Medolla, Italy). Data were collected prospectively from the registries of the Respiratory Disease Units, identifying all of the patients receiving NRS outside the ICU.
Variables recorded for each patient covered the period from March 1st until May 10th 2020 and included the following: demographics (age, sex), comorbidities (type and number), respiratory condition at admission (respiratory rate (RR), PaO2/FiO2 ratio), medications (type of drugs prescribed), mode and usage of NRS (ventilatory settings for NIV and CPAP, and flow rate for HFNC), and hospital stay (days). The number of patients who died, either in the respiratory unit or in the hospital, and the number who received endotracheal intubation (ETI) within the same time frame were recorded. Patients who were still hospitalized at the time of data analysis were excluded. The health status of the staff working in the respiratory units was closely monitored. All staff with fever or respiratory symptoms underwent chest radiography, and nasal and pharyngeal swab specimens were taken. Serology for SARS-CoV-2 antibodies and pharyngeal swabs were also performed periodically for all staff. No statistical sample size assessment was performed a priori; the sample size was the number of patients treated during the study period in the participating centres. Baseline characteristics of patients treated with HFNC, CPAP and NIV were compared. Across the treatment subgroups, continuous variables were expressed as means and standard deviation (SD) and were compared with the Kruskal-Wallis test or one-way ANOVA, while categorical variables were expressed as numbers and percentages (%) and were compared using the χ² test or Fisher's exact test. Percentages of available data for the overall study population were based on the total number of patients included in the study, while the distribution of available data over the treatment subgroups was based on the available data for each variable, with percentages calculated using the number of available data points for that subgroup. The fraction of infected professional health care workers was presented as numerical and percentage values. The association between ventilatory treatment and clinical outcomes was calculated using a logistic regression model, with the 30-day mortality rate adjusted for baseline confounders (age, PaO2/FiO2 ratio, steroid usage and number of comorbidities); a minimal sketch of this model appears at the end of this report. A total of 704 patients were considered and, of these, 670 were included and their data analyzed. Table 1 lists the patients' characteristics. CPAP, as applied by helmet, was used in the majority of patients (Supplementary Table 1). Twenty-eight out of 670 (4.2%) had a Do Not Intubate (DNI) order. Figure 1 illustrates the patients' allocation to NRS and their clinical outcomes. A total of 180 patients had died at 30 days. Twenty of the 28 DNI patients died (9% of the total number of deaths). In total, 114 patients died on spontaneous breathing without an expressed written DNI order. Most of the study patients were male (69.3%). Hypertension, diabetes, dyslipidemia, obesity and chronic cardiovascular disorders were the most represented comorbidities, evenly distributed among the groups with the exception of obesity, which was more prevalent in the NIV group. Hydroxychloroquine, methylprednisolone, low molecular weight heparin and tocilizumab were the drugs most used for treatment. The frequency distributions of age and PaO2/FiO2 ratio in the whole study population are shown in Supplementary Figure 1. As shown in Table 2, 353 health care workers, including doctors, nurses, and health-care assistants, had been taking care of patients receiving NRS.
Forty-two of them (12%) tested positive for SARS-CoV-2 infection, showing symptoms of mild disease (n=9) or moderate disease requiring hospitalization (n=3). All infected workers recovered well. The overall rate of infection among workers not specifically involved in the care of COVID-19 patients in the nine hospitals was 3.8±1.9%. Outcome measures stratified by PaO2/FiO2 ratio classes and according to NRS are reported in Table 5. Patients with a PaO2/FiO2 ratio below 50 mmHg presented a higher 30-day mortality rate and a higher rate of ETI (p<0.001 and p<0.001, respectively). Details of ventilatory management are given in Supplementary Table 1: NIV was used for as long as the patients could tolerate it, and in a proportion of cases (43/177, 24%) HFNC was applied during the intervals. Patients with bilateral posterior infiltrates were also usually placed in the prone position for a few hours a day, in all three NRS groups, with a schedule dependent on their tolerance. This study showed that using NRS devices is feasible in patients with ARF due to SARS-CoV-2 infection treated outside ICUs, in newly developed dedicated COVID Respiratory Monitoring Units, formerly respiratory wards, and in Respiratory Intermediate Care Units. Despite use of the recommended PPE, an 11.4% contamination rate was observed among healthcare workers treating the infected patients. After adjusting for potential confounders, 30-day mortality rates using HFNC, CPAP and NIV were not significantly different. One of the major concerns with using bio-aerosol-generating devices is that healthcare workers are at high risk of contracting the infection, and therefore most international guidelines recommend caution or even contraindicate their use (6-8). Nevertheless, the WHO advocates using CPAP or NIV for the management of respiratory failure in COVID-19 patients, provided that appropriate PPE is worn by the personnel (9). Several studies have found that the maximum exhaled air dispersion via different oxygen administration and ventilatory support strategies is minimal for CPAP through an oronasal mask or NIV through a helmet equipped with an inflatable neck cushion, and is much less than with any kind of oxygen delivery system (10). Interestingly, so far these studies have been conducted in negative-pressure hospital rooms with at least six air changes per hour (the minimum number of air changes recommended by WHO is 12 per hour). When such rooms were not available, as was the case for most of our patients, alternative hospital areas, including rooms with natural ventilation (expressed as the product of room volume and air change rate) of at least 160 L·s-1 per patient, were routinely employed, in keeping with the WHO statement (11). Indeed, according to the Italian recommendations (4), the large majority of our study population received CPAP (by helmet or face mask), mask NIV, or HFNC with a medical mask over the nasal prongs. Taking all these precautions into account and using all of the appropriate protection, the number of health workers who tested positive by serology or pharyngeal swab was still quite high (11.4%); however, the proportion who became ill (12/369, i.e. 3.3% of the staff involved) was in line with the 3.5% of health care workers requiring hospital admission in China (12), the only study so far that has reported this outcome during the COVID-19 outbreak.
One may claim that our staff could have been infected in the community rather than by exposure to NRS; however, in the nine hospitals in this study, the overall rate of infection among personnel not specifically involved in the care of COVID-19 patients was considerably lower (3.8±1.9%). The dramatic and rapidly increasing wave of the pandemic obliged us to treat a high number of severely hypoxic patients with NRS outside the ICU. These patients are usually admitted to "protected" environments. The ATS/ERS guidelines (13), for example, suggested using NIV in de novo respiratory failure only when managed by an experienced clinical team and closely monitored in the ICU. Concerning the first point, all of the units involved had extensive experience in NRS use over a long period, and the nurse-patient ratio was "unusually" high for a ward, since during the outbreak the nursing staff was reinforced in the locations where the acutely ill patients were admitted. In addition, fully equipped noninvasive monitoring systems were available. This is by far the largest report on the use of NRS outside the ICU; however, our dedicated COVID Respiratory Units cannot be considered equivalent to "usual" respiratory wards. Previous studies conducted in ICUs where NRS use was reported (1,15-21) account for 188 patients treated with NIV and 61 with HFNC, without showing their characteristics and severity, or the outcomes (in all but one study). Interestingly, this latter study (19) showed a very high mortality rate with both NIV and HFNC (80% and 52%, respectively). Indeed, only a few patients have been treated in a respiratory ward or unit, namely 80 and 33 patients using NIV and HFNC respectively, with a poor survival rate (22,23). Although comparison among studies is extremely difficult owing to the potential heterogeneity of the patients included and/or to differing local hospital organization, the failure rate (i.e. mortality and/or ETI) was much lower in our population, even when adjusted for potential confounders (see Table 3), and it was comparable to what was observed (26%) in a large Italian ICU study of patients who were mostly intubated and had a PaO2/FiO2 ratio similar to ours (1). In addition, in a recent two-period retrospective case-control study, Oranger et al. (24) demonstrated that CPAP could avoid intubation at 7 days and at 14 days, particularly in COVID patients with a previous DNI decision. The mortality rate was similar with all of the NRS modes used after adjustment for confounders; however, it has to be noted that HFNC was usually applied in less sick patients compared with NIV and CPAP, and this may reflect the clinicians' inclination to start these latter two modes in patients in whom they judged that a relatively high level of external positive end-expiratory pressure (PEEP) was more appropriate. It has been suggested that using any form of NRS might unduly delay the start of ETI; however, it should be noted that 28 patients had a DNI order (20 of them died), and that an ICU bed was not promptly available at the time of deterioration for a specific small subset of the patient population. It may also be argued that "only" less than 5% of patients signed a DNI order. In Italy, the very large majority of the population are not sufficiently aware of the new Advance Directive Law (25), or do not want to complete any such document in advance. Therefore, most of our patients arrived at hospital without any DNI or Do Not Resuscitate directives.
The reasons for not proceeding to ETI in the absence of a written DNI order might be explained by: presumed lack of benefit from ETI or mechanical ventilation (MV) based on clinical judgement, sudden death, or verbal refusal by the patient at the time of clinical deterioration. However, the majority of patients received "full treatment" when needed. Although this retrospective analysis of a large population indicates that NRS may help to treat severely affected COVID-19 patients outside the ICU, in newly dedicated respiratory areas with experienced staff, it also presents three main limitations. First, the design was retrospective, like most of the studies published during this terrible period. Second, the decision to start one of the NRS modes was left to the attending physicians and mainly relied on the actual availability of equipment, so the proportions of devices used were not evenly distributed. Third, as in most real-life studies dealing with the COVID-19 pandemic (1), missing data may be quite relevant; the critical nature of the situation did not always allow detailed information to be collected. To conclude, this is the first observational, large multicentre study showing that the application of noninvasive respiratory devices outside the ICU is feasible but is associated with a risk of staff contamination; however, the retrospective study design precludes drawing firm conclusions about effectiveness, despite the fact that the mortality and intubation rates compare favourably with those of previous reports.
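As referenced in the statistics section above, the adjusted analysis could be run along the following lines. This is a minimal sketch, not the authors' code: the data file and all column names are hypothetical.

```python
# Sketch of a 30-day mortality logistic regression adjusted for baseline
# confounders; "nrs_cohort.csv" and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nrs_cohort.csv")   # one row per patient (hypothetical file)
model = smf.logit(
    "death_30d ~ C(nrs_mode, Treatment('HFNC')) + age + pf_ratio"
    " + steroids + n_comorbidities",
    data=df,
).fit()
print(model.summary())               # exp(model.params) gives odds ratios
```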
The SARS-CoV-2 virus has caused the most significant pandemic (COVID-19) in recent history. Within 5 months of its emergence in December 2019, the virus had spread to more than 210 countries and had caused more than 170,000 deaths and more than 2.5 million reported cases as of April 22, 2020. This positive-strand RNA virus has a genome of about 30 kb (29,903 nucleotides in the reference genome) and is evolving rapidly. Aged mutations persist or are diluted away while new mutations arise. Mutations may increase the fitness of the virus to its environment, while elevating the risk of drug resistance, altering the case fatality rate, and reducing the efficacy of vaccines. Excluding the 5' leader and 3' terminal sequences, the genome contains 11 coding regions, including S, E, M, N and several open reading frames (ORF1ab, ORF3a, ORF6, ORF7a, ORF7b, ORF8, and ORF10) of various lengths and biological implications. Based on the sequence data of 1,932 SARS-CoV-2 strains from GISAID, NCBI GenBank, and CNCB (data release: March 31, 2020), we performed a phylogenetic analysis based on the full genomes, characterized the geographic and temporal patterns of aged and new mutations, examined the genomic profile and identified frequent mutations, inferred linkage disequilibrium (LD) and haplotype structure, constructed the evolutionary paths, correlated phylogenetic clusters with mutations, and investigated how the mutation count influenced case fatality. Based on the whole-genome sequence data, the phylogenetic tree of 1,932 SARS-CoV-2 strains showed six major groups (Figure 1): (1) Europe-1 (76% of 810 strains are from Europe, with the majority from Iceland); (2) Oceania/Asia (38.6% and 38.6% of 57 strains are from Oceania (all of the Oceania strains are from Australia in this data set) and Asia, respectively, with the majority from Australia and China); (3) Americas (94.4% of 342 strains are from the Americas, with the majority from the United States); (4) Europe-2 (80.8% of 104 strains are from Europe, with the majority from Great Britain and Iceland); (5) Asia-1 (72.2% of 133 strains are from Asia, with the majority from China); and (6) Asia-2 (55.5% of 364 strains are from Asia, with the majority from China and Japan). We used the strain originally isolated in China (1) as the reference genome and defined variations from the reference as mutations. Among the 1,932 SARS-CoV-2 strains studied, the average mutation counts per sample in Europe and the Americas were much higher than that in Asia (Figure 2A). This phenomenon partially reflects the earlier occurrence of the COVID-19 epidemic in Asia. Among the nations having at least 10 cases reported in this dataset, the top three nations with the highest average mutation counts were all located in Europe: Spain, Belgium, and Finland. The three nations with the lowest values were located in Asia: Singapore, Japan, and China. The six major clusters derived from the phylogenetic analysis (Figure 1) also showed more mutations in the European and American clusters compared with the two Asian clusters. The average mutation counts per sample in Europe-1, Oceania/Asia, Americas, Europe-2, Asia-1, and Asia-2 were 6.60, 6.46, 6.67, 6.21, 4.07, and 2.68, respectively. Mutations were distributed across the genome in different patterns according to the geographic locations where the samples were isolated.
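A minimal Python sketch of the per-sample mutation counting that underlies these averages: each aligned genome is compared with the reference position by position, and any difference (ignoring alignment gaps) counts as a mutation. The sequences here are toy strings, not real GISAID data.

```python
# Per-sample mutation counting against a reference (sketch, toy data).
REFERENCE = "ACGTACGTAC"

def mutations(aligned_seq, reference=REFERENCE):
    """Return (position, ref_base, alt_base) tuples, skipping gaps ('-')."""
    return [
        (i + 1, r, a)
        for i, (r, a) in enumerate(zip(reference, aligned_seq))
        if a != r and a != "-" and r != "-"
    ]

strains = {"strain1": "ACGTACGAAC", "strain2": "ACTTACGTAC"}
counts = {name: len(mutations(seq)) for name, seq in strains.items()}
average_per_sample = sum(counts.values()) / len(counts)
print(counts, average_per_sample)
```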
Mutation densities (i.e., the number of mutations per nucleotide in the gene region) were high in several gene regions, including ORF3a (Luxembourg, France, and Singapore), ORF8 (the United States, Korea, and China), N (Switzerland, Ireland, and Portugal), and ORF10 (Japan and Spain) (Figure 2A). ORF1ab harbors a substantial number of mutations, but because of its length the mutation density appears low. The United States had the largest sample size (N = 510) and an average mutation count of 6.42. Minnesota and Arizona showed the highest and lowest average mutation counts in the US, respectively (Figure S1). The dominant mutation types in the east (NY) and the west (CA, WA, and UT) were different, with mutations in ORF3a dominating in the former and mutations in ORF8 in the latter states. We found a total of 169,060 mutation counts. The average mutation count per locus was 169,060/29,903 = 5.654. The average mutation count per sample was 169,060/1,931 = 87.55. The average mutation count per locus per sample was 0.298%. After removing the mutations in the 5' leader and 3' terminal sequences, the total mutation count for all samples across the genome fell to 11,223. The average mutation count per locus was 11,223/29,409 = 0.3816. The average mutation count per sample was 11,223/1,931 = 5.812. The average mutation count per locus per sample was 0.0198%. Globally, the average mutation count per sample increased with time (r = 0.65 in a linear regression) (Figure 2B). This result indicates that the mutated strains persisted and were expanding. A large number of the mutations that originated from China at the early stage came to dominate the samples collected from the Americas and Europe a few months later. In addition to the accumulated aged mutations, we also identified a significant number of new mutations occurring on each date. Unlike the increasing trend of the accumulated aged mutations, the number of new mutations occurring globally on each date remained at a constant level. The major contributor of new mutations was China at the early stage, but shifted to the Americas and Europe after mid-February (Figure 2C). Fourteen frequent mutations with a mutation frequency of >0.1 were identified (Table S1 and Figure 3A). These frequent mutations showed interesting patterns. First, mutations that were first observed on the same or close dates and are in strong LD have similar mutation frequencies. For example, nt8782 (C to T) and nt28144 (T to C), which were used to define the L type and S type of SARS-CoV-2 (2), first co-appeared on Jan 5th 2020, and their coefficient of LD is R² = 0.983 (Figure 3A). The reference strain used here is the L type, while the S type exhibits both mutations. This implies that the frequent mutations on the same haplotype were co-transmitted to the infected cases during infection. The haplotypes were located in the same or across different gene regions. Other cases first co-observed on the same date included nt1059 (C to T) and nt25563 (G to T) (Feb 21st 2020), and nt28881 (G to A), nt28882 (G to A), and nt28883 (G to C) (Feb 25th 2020). Nucleotides nt3037 (C to T), nt14408 (C to T), and nt23403 (A to G), which have high pairwise LD, are the most frequent mutations in the current data set. Nucleotides nt17747 (C to T), nt17858 (A to G), and nt18060 (C to T), having almost identical mutation frequencies, also showed high pairwise LD. Nucleotide nt11083 had two alternative nucleotides (G to T or C) and was not correlated with the other frequent mutations.
Second, the evolutionary path of the fourteen frequent mutations was constructed (Figure 3B and 3C) based on their first observation times. Note that the real occurrence dates of mutations should be earlier because of left censoring of mutation events. The haplotype frequency of the reference strain, without any of the fourteen frequent mutations, was 16.86% and was exceeded by several mutated haplotypes (Figure 3C). The haplotype frequencies varied across continents and nations. For example, the mutation haplotype 8782T-28144C was prevalent in the US, but mutation 11083C/T and the mutation haplotype 28881A-28882A-28883C were not, reflecting the different transmission paths of viral strains and their different abilities to adapt to the local environment (Figure 3D). It is also interesting to see the time trajectory of the average mutation count for the fourteen frequent mutations (Figure 2D). Nucleotides nt14408, nt23403 and nt3037 were in strong LD and have emerged as the most prevalent mutations. The rising patterns of the average mutation count may imply natural selection and adaptation to different environments. Unexpectedly, the earliest mutations reported in this data set were in nt8782 and nt28144, also in strong LD, but their frequency has gradually declined. Strains with the relatively new mutations in nt14408, nt23403 and nt3037 have emerged to become the most common strains identified in the world, particularly in subgroup Europe-1. The major clusters of SARS-CoV-2 strains identified in the phylogenetic tree can be characterized by key mutations (Figure 1), most of which are also shown in Figure 2D. The two European groups had quite different mutation patterns. The Europe-1 group carried the specific mutations nt3037T (ORF1ab), nt14408T (ORF1ab), and nt23403G (S), in addition to the nt241T mutation in the 5' leader sequence. These mutations were hardly observed in other groups. The Europe-2 group is characterized by mutations nt11083T (ORF1ab), nt14805T (ORF1ab), and nt26144T (ORF3a). The Oceania/Asia group carried mutations nt1397A (ORF1ab), nt11083T (ORF1ab) and nt28688C (N). The Americas group carried mutations nt8782T (ORF1ab) and nt28144C (ORF8), which were used to define the S sub-type (2). These two mutations were also observed in the Asia-1 group, but not in the other groups. Additionally, the Americas group also carried mutations nt17747T (ORF1ab), nt17858G (ORF1ab), and nt18060T (ORF1ab). The Asia-2 group is distinct from the others because the strains in this group did not carry most of the frequent mutations. On average, the sample collection dates of the two Asian groups were closest to the first emergence of COVID-19 in December 2019, and their genomes were closest to the reference. They were characterized by multiple mutations of low frequency. Only about 6% of viral strains could not be categorized into the six main phylogenetic clusters. Finally, a point worth noting is the positive linear correlation between the case fatality rate and the average mutation count per sample (r = 0.4258, p = 0.0482) as of April 9, 2020 (Figure 4A). Among the eleven gene regions, only ORF1ab showed a comparable linear correlation (r = 0.407, p = 0.0542) (Figure 4B), suggesting that ORF1ab mutations may contribute the most to the case fatality rate. Surprisingly, by April 21, 2020, the correlation between case fatality and the average mutation count had become more significant, with p = 0.0348 (Figure 4C), and the contribution from ORF1ab had a p value of 0.0291 (Figure 4D).
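The country-level association just described is an ordinary least-squares fit of case fatality rate on average mutation count per sample. A minimal Python sketch follows; the arrays are hypothetical placeholders, not the study's values.

```python
# Linear regression of case fatality rate on average mutation count (sketch);
# the arrays are hypothetical, with one value per country.
from scipy.stats import linregress

avg_mutation_count = [2.7, 4.1, 5.8, 6.2, 6.5, 6.7]
case_fatality_rate = [0.02, 0.03, 0.05, 0.09, 0.10, 0.13]

fit = linregress(avg_mutation_count, case_fatality_rate)
print(f"r = {fit.rvalue:.4f}, p = {fit.pvalue:.4f}, slope = {fit.slope:.4f}")
```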
These results indicate that mutations have already impacted the clinical outcome of COVID-19, and are not just a viral fitness evolution event. In summary, we report evidence for temporal and geographic variation of SARS-CoV-2 mutations, and identified six major subgroups of SARS-CoV-2 strains with strong geographic preferences based on the complete genomes of 1,932 SARS-CoV-2 strains. These subgroups can be characterized by 14 common mutations, most of which occurred in ORF1ab, with notable exceptions in S, N, ORF3a and ORF8. This result suggests the importance of these genes in viral fitness and perhaps clinical relevance. Unexpectedly, we found that the case fatality rate in each country correlates positively with the average mutation count per sample. The positive correlation was contributed mainly by gene region ORF1ab, which codes for polyproteins that are cleaved to become proteases, the RNA-dependent RNA polymerase, a helicase, and several nonstructural proteins (nsp). The significance of the correlation increased with time, suggesting that mutations may have impacted viral transmission, clinical manifestation, or treatment outcomes. Compared with the L and S types originally reported (2), and the A, B, C types reported recently (3), our classification of six subgroups provides a mutation-based taxonomy of viral strains and explains the heterogeneity of strains within each of the L and S types and the A, B, C types. Interestingly, 5 out of 6 subgroups are characterized by a few common mutations each. This concept and method for the classification and characterization of viral strains can be applied to other viruses of public health concern. As more whole-genome sequencing data of SARS-CoV-2 become available online, we will be better positioned to decipher and understand the genomic, geographic and temporal distributions of viral mutations. However, the representativeness of the submitted genomic data should be considered carefully when interpreting the results. The different protein reading frames exhibited different mutation frequencies depending on geographic location. We dissected the prevalent mutations, investigated their LD structure, traced their evolutionary path, and found them important for the characterization of phylogenetic clusters. Interestingly, many frequent mutations occurred simultaneously, with LD close to 1. These mutations may occur simultaneously because of strong positive interactions. Alternatively, they might occur sequentially but appear simultaneous due to insufficient sampling. The biological implication of each mutation and their interactions remain an interesting topic to be explored. As aged mutations continued to accumulate over time and expand globally, new mutations constantly emerged in the gene pool to enrich the diversity of mutations. Since the fatality rate appears to correlate with the mutation count, the continued isolation and sequencing of viral genomes to monitor mutations becomes a crucially important component of the fight against this pandemic. We downloaded the whole-genome sequence data from the Global Initiative on Sharing Avian Influenza Data (GISAID) database (https://www.gisaid.org/), the National Center for Biotechnology Information (NCBI) GenBank (https://www.ncbi.nlm.nih.gov/genbank/), and the China National Center for Bioinformation (CNCB) (https://bigd.big.ac.cn/ncov/release_genome) on Mar 31st 2020.
After discarding the replicated sequences in the three databases and the sequences with a low quality indicator or no quality information, complete sequences of 1,938 SARS-CoV-2 genomes remained, including the Wuhan-Hu-1 reference genome of 29,903 nucleotides. Multiple sequence alignment was performed using MUSCLE (4). Mutations were identified as variations from the reference. A generalized association plot (GAP) (5) was used to visualize the mutation patterns and identify the outliers among variations and samples (Figure S2). We removed the two ends (5' leader and 3' terminal sequences) of the SARS-CoV-2 genome because of a significant number of gaps. That is, we focused on nucleotides from positions 266 to 29,674. Before removing the two ends, we observed two nucleotides with a non-negligible mutation frequency relative to their neighboring regions: nt241, with a mutation frequency of 45.5% (this nucleotide was in high LD with nt3037, nt14408, and nt23403), and nt29742, with a mutation frequency of 5.3%. We also removed four samples with a large deletion in ORF8 (sample EPI_ISL_417518 from Taiwan together with EPI_ISL_141378, EPI_ISL_141379, and EPI_ISL_141380 from Singapore), sample EPI_ISL_415435 from the UK with a large deletion in ORF1ab, and sample EPI_ISL_413752 from China with a large number of deletions (>300 nucleotides). A new mutation at time t was defined as a mutation that had never been observed before t in this data set. Average mutation counts per sample and/or per locus were calculated for nations and for the data collection time points to study the geographic and temporal distributions of mutations. Mutation frequencies in gene regions were illustrated, and the coefficient of LD (R²) between pairs of nucleotides was calculated using PLINK (6); a minimal sketch of this calculation follows the supplementary legends below. Frequent mutations were defined in this paper as mutations with a frequency of >0.1. Annotation of the frequent mutations was collected from the China National Center for Bioinformation. Nucleotide frequencies and haplotype frequencies were calculated by direct counting. Phylogenetic tree analysis was performed using MEGA X (7). GAP was applied to present the relationship between phylogenetic clusters and mutations. Finally, case fatality rate data were collected from the website of the Coronavirus Resource Center, Johns Hopkins University, on Apr 9th 2020 and Apr 21st 2020. A linear regression model was built to correlate the average mutation count per sample with the case fatality rate. Other statistical graphs were generated using our self-developed R programs. Table S1. Annotation of the fourteen frequent mutations. Figure S1. Geographic distributions of the average mutation counts in the entire genome and the standardized mutation densities in the eleven gene regions in the United States. A mutation is defined by a nucleotide change from the original nucleotide in the reference genome to the alternative nucleotide in the studied viral genomes. In each nation, the average mutation count per sample (i.e., the number of mutations in the genomes of all virus strains divided by the number of strains) is displayed in a color spectrum from blue (low average mutation count) to red (high average mutation count). The statistics of the average mutation count per sample are provided.
Mutation density in each of the eleven gene regions (i.e., the number of mutations divided by the number of nucleotides in a gene region for all virus strains) is standardized (i.e., the mutation density in each gene divided by the sum of mutation densities in the eleven gene regions) and shown in a pie chart. A table of the average mutation count per sample for the states having at least 10 cases reported in this dataset is listed.
[Figure S2. Proportions over time (December 2019 to March 2020) of haplotypes defined by the frequent mutations nt14408 (ORF1ab), nt23403 (S), nt3037 (ORF1ab), nt28881-28883 (N), nt25563 (ORF3a), nt1059 (ORF1ab), nt28144 (ORF8), nt8782 (ORF1ab), and nt18060 (ORF1ab), together with the virus strains to be excluded from further analyses; axis tick labels and haplotype legends are not reproduced here.]
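To make the mutation-counting and regression analysis described above concrete, the following is a minimal Python sketch of the per-country average mutation count and its regression against the case fatality rate. The file names and column layout are hypothetical placeholders, not the authors' actual pipeline, which used MUSCLE, PLINK, MEGA X and self-developed R programs.

import pandas as pd
from scipy import stats

# Hypothetical inputs (not the authors' actual files): per-strain mutation
# counts relative to the Wuhan-Hu-1 reference, and per-country case fatality rates.
strains = pd.read_csv("strain_mutations.csv")        # columns: strain_id, country, n_mutations
cfr = pd.read_csv("case_fatality_by_country.csv")    # columns: country, cfr

# Average mutation count per sample, by country.
avg = (strains.groupby("country")["n_mutations"].mean()
       .rename("avg_mutations").reset_index())

# Linear regression of case fatality rate on average mutation count per sample.
merged = cfr.merge(avg, on="country")
fit = stats.linregress(merged["avg_mutations"], merged["cfr"])
print(f"slope={fit.slope:.4f}, R^2={fit.rvalue**2:.3f}, p={fit.pvalue:.4g}")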
See Online for appendix The COVID-19 pandemic has led to unprecedented challenges. In particular, the impact of COVID-19 on neurological services and patients has been immense, as highlighted in a recent Editorial. 1 Despite the heightened burden felt by neurologists, the dedication of our authors and reviewers is not waning. Submissions to The Lancet Neurology between Jan 1 and June 14, 2020, increased by around 70% compared with the same period last year. Unabated expert advice from our clinical and statistical reviewers from around the world is ensuring the continued publication of the highest quality research and reviews for our readers. The journal continues to strive towards the goal of the Lancet group of disseminating the best science for better lives. Our achievement is reflected by the continued placement of The Lancet Neurology as the leading clinical neurology journal, according to the 2019 Journal Citation Report. The names of everyone who reviewed papers for the journal throughout 2019 are listed in the appendix; those who reviewed five papers or more are marked with an asterisk. We extend our warmest gratitude to all these reviewers. Deputy Editor, The Lancet Neurology, London, EC2Y 5AS, UK 1 The neurological impact of COVID-19. Lancet Neurol 2020; 19: 471.
In the context of the worldwide COVID-19 outbreak, the first reports are being published on the potential value of PET imaging. A total of 5 highly suspected or confirmed COVID-19 cases explored with 18F-FDG PET have so far been described by Qin et al. [1] and Zou and Zhu [2] at the diagnostic step, showing hypermetabolic pulmonary ground-glass opacities with low-dose CT correspondence, frequently associated with lymph node hypermetabolism. More broadly, Deng et al. [3] also argue for the possible utility of 18F-FDG PET as a sensitive tool to detect and monitor inflammatory diseases such as viral pneumonia, and to monitor disease progression and treatment outcomes, in line with the major goals of precision medicine, in which PET imaging is well known to be crucially involved [4]. In contrast, Joob and Wiwanitkit have recalled that 18F-FDG PET is still not recommended in infectious pneumonia, and have especially warned of the risk of disease spreading in PET departments [5]. Setting aside these justified arguments (on the one hand, the potential interest of PET imaging to better understand and characterize the disease, perhaps especially between the infectious and immune phases of the disease [6], with possibly also the targeted development of immunoPET in this indication [7]; on the other hand, the genuine risk of viral propagation), PET exploration of suspected or positive COVID-19 cases is today simply unrealistic because of the number of patients concerned. For example, and although the epidemic peak is presumably not yet reached, 2,365 patients were newly hospitalized in France with a confirmed COVID-19 diagnosis on the single day of March 26th, 2020 (https://www.gouvernement.fr/info-coronavirus/carte-et-donnees), while our national PET capacity is currently estimated at approximately 2,000 exams per day (https://www.sfmn.org/drive/SECRETARIAT%20GENERAL/ENQUETE_ANNUELLE/EnqueteNationale2019_publicWeb.pdf). We must also note that these 18F-FDG PET exams involve longer and logistically more complex procedures, in particular for disinfection between patients. Above all, any assessment of feasibility has to take critical account of the necessary continuation of management of non-COVID-19 diseases, with explorations that cannot be canceled or delayed without causing a loss of chance for patients, both for acute and chronic diseases, and especially for cancer diagnosis and evaluation. Along these lines, targeted explorations based on precision medicine seem hardly compatible with the mass exploration of numerous patients without delay and within a very short time period. Beyond the cutting-edge PET investigations currently applied to selected patients and specific indications, this contradiction highlights the need to develop light PET protocols, as previously achieved in radiology with the now very useful low-dose thoracic CT [8], allowing broader availability of the exploration through shorter acquisition durations and perhaps also a shorter uptake period, while probably preserving whole-body exploration to better characterize the extension of the disease and its prognosis.
New technological achievements based on ultra-low-dose whole-body PET instrumentation [9], combined with deep neural networks for reconstruction, including generative adversarial networks [10, 11], constitute a great opportunity for such developments, which need to be encouraged, with ultimately possible larger applications in other contexts, for example cancer screening [12].
* https://www.cdc.gov/coronavirus/2019-ncov/hcp/framework-non-COVID-care.html. † https://www.cms.gov/files/document/cms-non-emergent-elective-medical-recommendations.pdf. CDC used data from its National Syndromic Surveillance Program (NSSP) to assess trends in ED visits from week 1, 2019 through week 21, 2020 for three life-threatening health conditions: MI, stroke, and hyperglycemic crisis. NSSP is a collaboration among CDC, federal partners, local and state health departments, and academic and private sector partners to collect, analyze, and share electronic patient encounter data received from emergency departments, urgent and ambulatory care centers, inpatient health care settings, and laboratories for public health action. § NSSP includes ED visits from a subset of hospitals in 47 states (all but Hawaii, South Dakota, and Wyoming) and the District of Columbia, capturing approximately 73% of ED visits nationwide. These analyses were limited to EDs with consistent ≥90% completeness for patient discharge diagnosis to ensure data quality (1,670 EDs). ¶ The three conditions were defined using the following International Classification of Diseases, Tenth Revision (ICD-10) codes: MI = I21-I22; stroke = I60-I61 (hemorrhagic stroke) or I63 (ischemic stroke); and hyperglycemic crisis = E10.1, E11.1, or E13.1 (diabetic ketoacidosis) or E11.0, E13.0, or E10.65 and E10.69 (hyperosmolar hyperglycemic syndrome). Weekly numbers of ED visits for each of the three conditions were compared for two 10-week periods: January 5-March 14, 2020 (weeks 2-11, prepandemic) and March 15-May 23, 2020 (weeks 12-21, early pandemic). The absolute differences and percentage change in the number of visits from the prepandemic to the early pandemic period were tabulated, overall and within age-sex strata. Analyses were conducted using SAS (version 9.4; SAS Institute). Trends in the number of ED visits for MI and stroke were relatively stable during the first half of 2019, increased slightly in the second half of 2019, and then stabilized during the first few weeks of 2020, remaining stable throughout the prepandemic period (Figure 1). The number of ED visits for MI and stroke declined sharply starting at week 10 (corresponding to the week beginning March 1, 2020), reaching the lowest level during weeks 13-14 (the weeks beginning March 22 for MI and March 29 for stroke), coinciding with the early weeks after the declaration of the COVID-19 national emergency. Since the nadir, ED visits for MI and stroke have gradually increased but remain below prepandemic levels. Compared with the prepandemic period, the number of ED visits during the early pandemic period was 23% lower for MI and 20% lower for stroke (Table). The number of ED visits for hyperglycemic crisis followed similar, albeit less pronounced, trends to those observed for MI and stroke; the number of ED visits for hyperglycemic crisis was 10% lower during the early pandemic than during the prepandemic period, with the lowest level occurring at week 14. The reduction in visits for all three conditions during the early pandemic was similar in males and females. § https://www.cdc.gov/nssp/index.html. ¶ During weeks 2-21, 2020, an average of 3,504 EDs reported to NSSP. On average, 1,670 EDs (48%) had consistent (≥90%) completeness of patient discharge diagnosis data during this period.
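As an illustration of the period comparison described above, the following Python sketch computes the absolute difference and percentage change in visit counts between the two 10-week periods. The input file and column names are hypothetical stand-ins for NSSP-style weekly counts, not CDC's actual SAS workflow.

import pandas as pd

# Hypothetical input: weekly ED visit counts for 2020, with columns
# week (ISO week number), condition (MI, stroke, hyperglycemic crisis), visits.
visits = pd.read_csv("weekly_ed_visits_2020.csv")

pre = visits[visits["week"].between(2, 11)]     # Jan 5 - Mar 14 (prepandemic)
early = visits[visits["week"].between(12, 21)]  # Mar 15 - May 23 (early pandemic)

summary = pd.DataFrame({
    "pre": pre.groupby("condition")["visits"].sum(),
    "early": early.groupby("condition")["visits"].sum(),
})
summary["abs_diff"] = summary["early"] - summary["pre"]
summary["pct_change"] = 100 * summary["abs_diff"] / summary["pre"]
print(summary)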
The relative decline in the number of ED visits between the prepandemic and early pandemic periods was similar across age groups for MI and stroke, whereas the decline in ED visits for hyperglycemic crisis tended to be larger among younger age groups, particularly for females (Table). The absolute decrease in ED visits for MI was largest among persons aged 65-74 years for both men (2,114-visit decrease) and women (1,459) (Figure 2). The absolute decrease in ED visits for stroke was largest among men aged 65-74 years (1,406-visit decrease) and women aged 75-84 years (1,642). The absolute decrease in ED visits for hyperglycemic crisis was largest in younger adults aged 18-44 years (419-visit decrease for men, 775 for women). In the weeks following the declaration of COVID-19 as a national emergency on March 13, 2020, NSSP identified substantial reductions in the numbers of ED visits by males and females in all age groups for three potentially life-threatening conditions: MI (23% decrease), stroke (20%), and hyperglycemic crisis (10%). These estimates are consistent with, but smaller in relative magnitude than, the 42% overall decline in ED visits observed during the early pandemic period (1). The largest absolute differences were observed in adults aged ≥65 years for MI and stroke, and in adults aged 18-44 years and persons aged <18 years for hyperglycemic crisis. The substantial reduction in ED visits for these life-threatening conditions might be explained by many pandemic-related factors, including fear of exposure to COVID-19, unintended consequences of public health recommendations to minimize nonurgent health care, stay-at-home orders, or other reasons. A short-term decline of this magnitude in the incidence of these conditions is biologically implausible for MI and stroke, especially among older adults, and unlikely for hyperglycemic crisis; the finding suggests that patients with these conditions either could not access care or were delaying or avoiding seeking care during the early pandemic period. There have been reports of excess mortality during the COVID-19 pandemic wherein deaths not associated with confirmed or probable COVID-19 might have been directly or indirectly attributed to the pandemic.** The striking decline in ED visits for acute life-threatening conditions might partially explain observed excess mortality not associated with COVID-19. Previous studies have also reported significant reductions in hospital admissions for MI and stroke during the COVID-19 pandemic (2-7). For example, a study of nine high-volume U.S. cardiac catheterization laboratories found a 38% decrease in activations for heart attacks during March 2020 compared with the 14 months before the pandemic (2). Further, large hospital systems in California, Massachusetts, and New York City have reported 43%-50% reductions in admissions for MI and other acute cardiovascular conditions during the pandemic (3-5), and neuroimaging data from approximately 850 U.S. hospitals indicate a 39% reduction in the number of patients who were evaluated for signs of stroke (7). Decreases in ED visits for hyperglycemic crisis might be less striking because patient recognition of this crisis is typically augmented by home glucose monitoring and is not reliant upon symptoms alone, as is the case for MI and stroke. The decrease in visits for hyperglycemic crisis merits further study because there are few published reports on this topic.
MI, stroke, and hyperglycemic crisis are common life-threatening conditions that require urgent attention to reduce associated morbidity and mortality. Heart disease is the leading cause of death, and stroke is the fifth leading cause of death in the United States††: someone in the United States has a heart attack every 40 seconds,§§ and approximately 795,000 persons have a stroke annually.¶¶ Diabetes affects 34 million Americans,*** and uncontrolled hyperglycemia (high blood glucose) can lead to diabetic ketoacidosis or a hyperosmolar hyperglycemic state, life-threatening but preventable metabolic complications of diabetes (8). It is important that the public recognize the symptoms of MI, stroke, and hyperglycemic crisis††† and understand that immediate medical attention for these acute issues can prevent serious heart or brain damage, metabolic complications of diabetes, or death. The sooner emergency care begins, the better are the chances for survival. Even in the face of the COVID-19 pandemic, emergency care can and should be accessed and provided without delay. The findings in this report are subject to at least five limitations. First, NSSP coverage is not uniform across or within states, and the hospitals reporting to NSSP change over time; however, NSSP captures approximately 73% of ED data analyzable at the national level. Second, conditions were defined using ICD-10 diagnosis codes. Differences in coding practices might exist; however, coding for common conditions, especially the life-threatening conditions described in this report, is likely consistent (9, 10). Third, NSSP does not capture mortality data, and it is not known whether patients with MI or stroke sought treatment elsewhere or died at home. Fourth, despite allowing 2 weeks from the end of week 21 before analyzing the data, the findings from the final weeks might be slightly underestimated because of delayed reporting. Finally, seasonal effects in trends in ED visits might exist; however, a proximal comparison period was best for this analysis to minimize other factors that might have affected trends in disease incidence or health care-seeking behavior between years. Despite these limitations, this study also has important strengths. NSSP is a national surveillance system with automated electronic reporting and the ability to detect and monitor health events in near real time, and this analysis was restricted to hospitals with consistent reporting of patients' discharge diagnoses to minimize the effects of differential reporting. At least one in five expected U.S. ED visits for MI or stroke and one in 10 ED visits for hyperglycemic crisis did not occur during the initial months of the COVID-19 pandemic. ††† The five major symptoms of MI, or heart attack, are chest pain or discomfort; feeling weak, light-headed, or faint; pain or discomfort in the jaw, neck, or back; pain or discomfort in one or both arms or shoulders; and shortness of breath. The F.A.S.T. acronym is a mnemonic that might help determine whether someone is having a stroke: F = Face: when the person smiles, does one side of the face droop? A = Arms: when the person tries to raise both arms, does one arm drift downward? S = Speech: when the person tries to repeat a simple phrase, is the speech slurred or strange? T = Time: if any of these signs are present, persons should call 9-1-1 right away. Signs of hyperglycemic crisis might include low blood pressure, lethargy, dehydration, or a confused or altered mental state attributable to high blood glucose in a person with diabetes.
Patients might have delayed or avoided seeking care because of fear of COVID-19, unintended consequences of recommendations to stay at home, or other reasons. EDs play a critical role in treating acute conditions that might result in permanent disability or death. Persons experiencing severe chest pain, sudden or partial loss of motor function, altered mental status, signs of extreme hyperglycemia, or other life-threatening issues should call 9-1-1, irrespective of the COVID-19 pandemic. Clear communication from public health and health care professionals is needed to reinforce the importance of timely emergency care for acute health conditions and to assure the public that EDs are implementing infection prevention and control guidelines§§§ to ensure the safety of their patients and health care personnel. All authors have completed and submitted the International Committee of Medical Journal Editors form for disclosure of potential conflicts of interest. No potential conflicts of interest were disclosed. §§§ https://www.cdc.gov/coronavirus/2019-ncov/hcp/infection-control-recommendations.html. What is already known about this topic? National syndromic surveillance data suggest a decline in emergency department (ED) visits during the COVID-19 pandemic. What is added by this report? In the 10 weeks following the declaration of the COVID-19 national emergency, ED visits declined 23% for heart attack, 20% for stroke, and 10% for hyperglycemic crisis. What are the implications for public health practice? Persons experiencing chest pain, loss of motor function, altered mental status, or other life-threatening issues should seek immediate emergency care, regardless of the pandemic. Communication from public health and health care professionals should reinforce the importance of timely care for acute health conditions and assure the public that EDs are implementing infection prevention and control guidelines to ensure the safety of patients and health care personnel.
The COVID-19 pandemic that developed in Wuhan, China in late 2019 has presented a stark warning to international financial markets with regards to the exceptional vulnerabilities and fragility that can quickly transpire and disseminate. The first reported case of an individual suffering from COVID-19 can be traced back to 17 November 2019, according to media reports sourced in unpublished Chinese government data. However, the first official identification by international organisations such as the World Health Organization (WHO) was on 31 December 2019. Since these two dates, we have observed a worldwide economic slowdown that has thrust a number of countries into severe recessions, with the probability of a broad economic depression ever increasing. The pandemic does, however, present a unique opportunity to investigate two key questions: 1) how do volatility spillovers and episodes of financial market contagion behave during pandemics; and 2) how did Chinese coronavirus and influenza indices behave with regards to total and directional pairwise volatility spillovers. While investigating the traditional interactions between the Chinese coronavirus and influenza indices and a number of traditional financial markets during the current pandemic, we specifically test how volatility interactions differed in the aftermath of the coronavirus outbreak. As further developed by Gamba-Santamaria et al. [2017] and Antonakakis et al. [2019], we build on the framework of Diebold and Yilmaz [2012] and construct volatility spillover indices using a DCC-GARCH t-copula framework to model the multivariate relationships of volatility among stock, commodity (agriculture, energy and precious metal), foreign exchange and cryptocurrency markets. It is important to understand whether the Chinese influenza indices acted as a true financial barometer given the depth and scale of the Chinese coronavirus outbreak. Such research is of significant importance should population centres face repeated closures and lock-downs during future attempts to reduce the reproduction rates of COVID-19 and any other pandemics that we might face in the future. Financial crises are found to present a number of notable similarities during their development and expansion across traditional financial assets (Reinhart and Rogoff [2008], Diebold and Yilmaz [2012]), most notably through the presence of substantial and significant volatility spillovers. Diebold and Yilmaz [2012] considered whether the identification of such spillovers could provide evidence of an 'early warning system' for emergent crises. Within this context, and considering the severity of the COVID-19 pandemic, such early identification of market stresses, as measured through an abnormal influenza dynamic in Chinese markets, could have provided substantial and timely warnings about the forthcoming severity of the current worldwide pandemic. Methodological support could have been provided through the use of a volatility spillover measure based specifically on forecast error variance decompositions from vector autoregressions.
The framework of Diebold and Yilmaz [2012] in particular allows us to use directional volatility spillovers to test for the market effects of Chinese influenza indices on the CSI300 (equity markets), the US Dollar/RMB exchange rate (foreign exchange markets), Bitcoin (as a measure of alternative investments in the form of cryptocurrency markets) and commodity markets, as considered through the use of the gold, oil and soybean markets. Further informational benefit is provided through the addition of specific indices measuring both coronavirus and face masks. Such indices are developed from the financial performance of corporate entities whose central business practice is based on the R&D, production and sales of products directly related to influenza, coronavirus and the production of face masks. Testing is repeated across different model specifications for added methodological robustness. Figures 1 and 2 present evidence of the number of countries affected to date and the sharp growth in the number of confirmed cases and deaths as reported by the World Health Organization (WHO). The rapid deterioration of international conditions has been largely attributed to a lack of synchronisation of the global response. Evidence suggests that the countries with the best response rates made the tough decision to close borders quite early on, while others have pointed to the presence of female leaders as a particularly important contributing factor to the success of the response. The decision to close borders and reduce the movement of people severely impacted a number of economic measures as the spread of coronavirus began to escalate. While the contagion effects of the pandemic began to take their toll on economic conditions, financial markets responded in a number of unexpected ways. For example, evidence suggests that the prices of stocks unlucky enough to share name characteristics with 'coronavirus' suffered substantial and significant deterioration in line with the escalation of the pandemic. The sharp deterioration of economic conditions is presented in Figures 3 and 4, which consider the purchasing managers' indices (PMI) and stock market performance, respectively, for China, the US and Europe. It is interesting to note that while China experienced a sharp decline of the PMI in February 2020, it immediately increased to a level above its previous 12-month average in March 2020. The US has yet to present any evidence of forthcoming economic shocks within the PMI data, while the data suggest that Europe suffered the same economic shock as China one month later. When comparing the stock market responses in Figure 4, we clearly observe the lagged responses in both the US and German stock markets as investors failed to appropriately grasp the scale of the forthcoming economic reverberations inherent in the COVID-19 pandemic. In Chinese markets, by contrast, we can identify two significant periods of decline: the first in mid-January 2020, as the number of confirmed cases sharply increased in China while the Chinese government enforced countermeasures in response; then, after a period of market growth, the Shanghai Stock Exchange fell sharply throughout March 2020 as signals of deep economic repercussions increased along with fears of a revival in the reproductive rates of the pandemic in the form of a second wave.
Further, in an incredible sequence of events, primarily due to the collapse in the demand for oil combined with a number of international geopolitical issues, the price of West Texas Intermediate (WTI) futures turned negative as increased supply and reduced storage capacity hindered standard market operations, leading to a scenario where an investor would 'receive' in excess of $40 per barrel to buy a May 2020 WTI futures contract for near-month delivery. From Figure 5 we can clearly observe evidence of this extreme situation of contango. This scenario presented evidence of some of the sharp, abnormal volatility effects generated within the COVID-19 pandemic. The sources and directions of these volatility spillovers can provide rich information for investors and policy-makers alike when preparing for future deteriorations in circumstances, or indeed future pandemics, should they arise. Our results not only show that COVID-19 has had a significant impact on Chinese financial markets, as has been broadly identified across a rapidly developing area of research, but also indicate, in a novel finding to date, that COVID-19 has had a substantial effect on the Bitcoin market. This result is specifically identified through the use of the indices relating directly to traditional influenza, coronavirus and face masks, the construction of which is discussed above. Further, each index represents the effects of COVID-19 on Chinese financial markets as measured through real-time investor sentiment. In an expected result, both the coronavirus and influenza indices directly influence the face mask index as measured by directional volatility spillovers. We also identify evidence of strong volatility spillovers from the coronavirus and influenza indices towards Chinese gold and oil futures markets during the pandemic, presenting evidence of the existence of traditional flight-to-safety channels during this time. However, the existence of both significant and substantial coronavirus/influenza spillovers to the Bitcoin market during the entire sample period analysed is of particular interest. One explanation for this outcome could be sourced in the education schemes developed and progressed by the Chinese Communist Party to educate the population of China about the nature and future of digital currencies 1. Such government support has provided substantive confidence in these rapidly developing digital technologies. Further analysis identifies that, in the very early stages of the COVID-19 outbreak, it became clear that the coronavirus index possessed a far more substantial, pronounced and persistent impact on financial markets than the traditional and long-standing influenza index. This presented a signal of the deep-rooted economic, social and geopolitical issues that COVID-19 presented for the global population. Based upon multiple robustness checks, our results remain unchanged. The remainder of this paper is structured as follows: previous literature that guides the development of our selected hypotheses is summarised in Section 2. Section 3 presents a thorough explanation of the wide variety of data used in this analysis, while Section 4 presents a concise overview of the methodologies utilised to analyse the relevant hypotheses. Section 5 specifically investigates the interactions between Chinese financial markets and other traditional financial market asset classes, while Section 6 concludes.
Our research focuses directly on the analysis of volatility spillovers during the period surrounding the development of the COVID-19 pandemic, with particular emphasis on the influenza and coronavirus indices and their influence on traditional financial market asset volatility. 1 An example of such material is available at: https://mp.weixin.qq.com. Diebold and Yilmaz [2012] used a generalised vector autoregressive framework, in which forecast-error variance decompositions are invariant to the variable ordering, to provide a measure of both total and directional volatility spillovers. Using data on US stock, bond, foreign exchange and commodities markets from January 1999 to January 2010, evidence suggested that the strongest volatility spillovers ran from stock markets to other markets after the collapse of Lehman Brothers in September 2008. This research, while providing a significant advance in technique, found its sources in the works of Baillie and Bollerslev [1991], Susmel and Engle [1994] and Koutmos and Booth [1995], who had examined multi-frequency and cross-market spillovers in foreign exchange and equity markets respectively. Regional volatility has since been examined in Japan (Kim and Rhee [1997]; Ng [2000]); the United States (Tse [1999]); South Korea (Pyun et al. [2000]); Hong Kong (Gannon [2005]); Europe (Baele [2005], Fernandez-Rodriguez et al. [2015]); Taiwan (Lin [2006]); Canada (Krause and Tse [2013]); the United Kingdom (Antonakakis et al. [2016]); and the BRICs as a whole (Bekiros [2014]; Syriopoulos et al. [2015]; Mensi et al. [2016]). Further, Balli et al. [2015] identified significant spillover effects from developed markets to emerging markets. Within our selected research methodology, we consider multiple traditional financial assets to test for the presence of volatility transmission and spillovers. To date, a number of effects have been identified with regards to gold and cryptocurrency-based contagion effects sourced within the COVID-19 outbreak, as well as side-effects relating to name association. Our selection process was based on the findings of previous work establishing the existence of theoretically robust interlinkages. Here we consider the relationships between the performance of companies with influenza- and coronavirus-dependent business structures and the broad equity index as represented by the CSI300, based on the prior works of Baele [2005], Yilmaz [2009], Singh et al. [2010], Erdogan et al. [2013], Yarovaya et al. [2016], Smimou and Khallouli [2016], Bhuyan et al. [2016] and Shahzad et al. [2017]. Further, we consider interlinkages between COVID-19 and commodities as represented by gold, oil and soybeans. Earlier work identified substantial contagion effects sourced from the recent coronavirus outbreak upon both gold and oil markets. The addition of soybeans as a central commodity was based on China being the world's largest importer ($39.6 billion USD in 2017, representing two-thirds of the world's demand), with usage being driven by the product's importance as food for animals bred to meet the growing demand for protein, but also as a replacement for traditional carbohydrates such as pasta and rice. Further support can be sourced from the works of Hammoudeh et al. [2004], Malik and Hammoudeh [2007], Bhar et al. [2008], Malik and Ewing [2009], Du et al.
[2011], Sadorsky [2012], Antonakakis and Kizys [2015], Kumar [2017], Bouri et al. [2017] and Lau et al. [2017]. Spillovers, persistence and the effects of volatility on real economic data and energy markets are other research fields that provide particular value to our work (as per Greasley and Oxley [1999], Oxley [2002], Fatai et al. [2004], Ma et al. [2008, 2009] and Zhao et al. [2010]). Another important channel through which contagion can spill over is foreign exchange markets, as found by Hong [2001] and Do et al. [2016], with further evidence provided by the intra-day seasonality of transacted limit and market orders in the DEM/USD markets (Cotter and Dowd [2010]). Finally, we test for the presence of spillovers to the growing cryptocurrency sector, specifically Bitcoin, with previous works establishing contagion channels through multiple avenues as the product has matured since its initial development in 2009 (Yi et al.). The transmission and spillover of fear is also found to be a central topic guiding our research methodology and design. Tsai [2014] found that the US stock market shows three periods during which its net spillover effect exceeds zero, including the period prior to 1997 and the dot-com bubble, where the fear index developed in that study correlates significantly with the spillover from the US stock market into other markets. Hameed et al. [2010] documented inter-industry spillover effects in liquidity, which are likely to have been sourced in binding capital constraints, but followed periods of substantial decline in market valuation. Such results echo those of Lee and Rui [2002], who had found that there exists a positive feedback relationship between trading volume and return volatility in New York, Tokyo and London-based stock returns, where US financial market variables were found to contain extensive predictive power for UK and Japanese financial market variables. Other specific uses of spillover methodologies have incorporated research based on quantitative easing effects, and on terrorist attacks and aviation disasters (Akyildirim et al. [2020]). Much of this work has been guided by research relating to the direct effects of volatility transmission and contagion as measured through dynamic correlation analyses (Syllignakis and Kouretas [2011]) and broad behaviour through extreme financial market events. A variety of behavioural differences have already been identified, such as that found in the work of Bialkowski et al. [2006], who identified that spillovers from the US stock market to the UK, Japanese and German markets are more frequent when the latter markets are in a crisis or an extreme volatility regime, while Liu [2014] found that extreme downside movements of the S&P 500 and Nikkei 225 are significantly predictive of the likelihood of extreme downside movements in all the investigated Asia-Pacific markets. The 2007 financial market crisis was identified as a particularly significant outlier event driving substantial contagion and spillover effects around the world (Longstaff [2010], Kim et al. [2015], Xu et al. [2018]). While considering our methodological selection, we also considered research that provides preemptive structural warnings with regards to broad issues surrounding simplistic methodological errors. We considered the work of Bae et al.
[2003] to be particularly important in identifying that contagion is predictable and depends on regional interest rates, exchange rate changes, and conditional stock return volatility, while Corsetti et al. [2005] found that the result of 'no contagion, only interdependence' stressed by recent contributions is due to arbitrary and unrealistic restrictions on the variance of country-specific shocks. Specifically, Pesaran and Pick [2007] shed doubt on the general validity of the correlation-based tests of contagion recently proposed in the literature, which do not involve any market-specific variables. We obtain the data from the WIND database, which is the largest financial data provider in China. 2 In this paper, we use 5-minute high-frequency data for the empirical analysis to best capture the volatility dynamics of different financial assets during the COVID-19 pandemic. The variables included in the analysis are: the coronavirus index, influenza index, face mask index, CSI 300 index, gold futures price, oil futures price, soybean futures price, US Dollar/RMB spot exchange rate, and the Bitcoin (BTC) price. A time series plot of these series as originally measured is provided in Figure 6. A number of both sharp and significant price movements are identifiable. Particularly protracted declines in the value of the CSI300, gold and oil are evident within March 2020, which is also evident in the volatility of returns, computed by taking first differences of the natural logarithms of each index or price, and presented in Figure 7. During the same period, there are episodes of exceptional volatility in Bitcoin, where the price falls from over $9,000 to almost $4,000 in two distinct and sharp declines. 2 The WIND database is widely used for research topics related to China studies; for example, see Liu et al. [2019] and Allen et al. [2019]. While such pronounced behaviour in equity and commodity markets can be attributed directly to events relating to the severity of the COVID-19 pandemic, the Bitcoin price collapse is somewhat unique. One possible reason for this movement has been attributed to the behaviour of leveraged traders (up to 100:1 leverage has been provided), as evidenced in substantially elevated trading volumes (almost $1 billion) and opaque pricing behaviour on the BitMEX exchange, which has long been associated with the provision of leveraged cryptocurrency trading 3. The descriptive statistics for each logarithmic return series are presented in Table 1, adding further statistical support to the exceptional volatility of Bitcoin in comparison to the other selected variables, with pronounced negative skewness and elevated kurtosis; similar characteristics are also present in the soybean market. In this paper, we specifically select three important concept-based indices, denoted the coronavirus index, the influenza index and the face mask index, from the Wind database for our empirical analysis, as these indices allow us to explore market investors' reactions to the COVID-19 outbreak 4. The Wind database develops a number of concept-based stock price indices for the Chinese market. The Wind Concept Index is a group of indices based on market hot spots, related topics, and capital market needs to meet specific concepts. The Wind Concept Index is an equal-weighted index consisting only of A-share stocks on the Shanghai Stock Exchange (SSE) and Shenzhen Stock Exchange (SZSE).
Based on the theme of the concept index, the Wind database screens stocks for the relevant industrial chain links and characteristics, taking into account turnover, market quotation, and transaction characteristics such as linkage, to determine the final sample 5. Both the SSE and SZSE are open five days per week for 4 hours per day: 9:30am-11:30am (morning session) and 1:00pm-3:00pm (afternoon session) local time. Hence, all three COVID-19-related indices are only available during market trading hours. Insert Table 1 about here. The availability of the face mask index determines the starting date for our analysis, as it only becomes available on 05 February 2020. Hence, our sample runs from 05 February 2020 to 12 May 2020 at the 5-minute frequency, with 1,982 observations 6. On each day, we select the data from when the two Chinese stock exchanges are open during the two trading sessions (i.e., 9:30am-11:30am and 1:00pm-3:00pm). To align with the face mask index, we then obtain all the data for the same period each day at local time. The coronavirus index is used as the main proxy to measure the effects of the COVID-19 virus in financial markets. The coronavirus index is created using an equal-weight approach and includes 110 publicly listed Chinese companies that are heavily involved in producing diagnostic reagents, vaccines, antibiotics, antivirals, and masks related to pneumonia. 7 The coronavirus index shows how investors pursue investment opportunities in the related companies. The influenza index is considered an alternative proxy measure of the COVID-19 virus from a Chinese financial market perspective. The influenza virus is widely known to cause acute respiratory infection, which is highly contagious and spreads quickly. The Wind database creates a special concept-based stock price index, the influenza index, which comprises 35 listed A-share companies involved in the R&D and manufacturing of cold medicines and vaccines, to track the performance of the related companies 8. Face masks have experienced an unprecedented surge in global demand since the outbreak of COVID-19. We consider this special type of product in our analysis as it plays a critical role in preventing respiratory infection. Many factories that usually made other products switched to making masks to meet the unprecedented demand following the coronavirus outbreak in China. We use the face mask index (also known as the Antiseptic Gauze index), which includes 37 listed Chinese companies from the SSE and SZSE in the field of producing face masks and raw materials. The face mask index simply provides a measure of the performance of the face-mask-producing industry after the outbreak of COVID-19. 6 We exclude the missing values during the sample period. In addition, the Chinese stock markets closed for the Lunar New Year break from 24 January 2020 to 03 February 2020. 7 The 5-minute coronavirus index is first available on 23 January 2020, and then the Chinese stock markets closed for the Lunar New Year break until 03 February 2020. 8 The 5-minute influenza index is available from 02 November 2017. Earlier applications extend Diebold and Yilmaz [2012] by using a DCC-GARCH framework to model the multivariate relationships of volatility among assets; in that approach, the volatility is computed directly from the covariance matrix obtained from the DCC-GARCH model of Engle [2002], which has the advantage of avoiding the need to define the volatility ex ante.
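To make the return construction and the two-stage DCC-GARCH idea concrete, the following is a minimal Python sketch, assuming a hypothetical file of aligned 5-minute prices. It fits univariate GARCH(1,1) models with the arch package and then runs the DCC correlation recursion with fixed (a, b) for illustration; in practice these parameters are estimated by maximum likelihood under a Student-t copula, so this is a simplified sketch rather than the paper's exact procedure.

import numpy as np
import pandas as pd
from arch import arch_model

# Hypothetical input: aligned 5-minute prices, one column per asset
# (e.g. coronavirus index, CSI300, gold, oil, soybean, USD/RMB, Bitcoin).
prices = pd.read_csv("prices_5min.csv", index_col=0, parse_dates=True)
returns = 100 * np.log(prices).diff().dropna()  # log returns, in percent

# Stage 1: univariate GARCH(1,1) with Student-t innovations for each series.
std_resid = {}
for col in returns.columns:
    res = arch_model(returns[col], vol="GARCH", p=1, q=1, dist="t").fit(disp="off")
    std_resid[col] = res.std_resid
Z = pd.DataFrame(std_resid).dropna().to_numpy()

# Stage 2: DCC correlation recursion; (a, b) are fixed here for illustration.
a, b = 0.05, 0.90
Q_bar = np.cov(Z, rowvar=False)      # unconditional covariance of std. residuals
Q = Q_bar.copy()
R_path = []
for z in Z:
    d = np.sqrt(np.diag(Q))
    R_path.append(Q / np.outer(d, d))                     # R_t from Q_t
    Q = (1 - a - b) * Q_bar + a * np.outer(z, z) + b * Q  # Q_{t+1}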
Prior studies investigate the volatility spillovers and co-movements between WTI and major oil and gas firms' stock prices using this spillover index approach. We use a combination of a DCC-GARCH-type framework and the TVP-VAR approach of Antonakakis and Gabauer [2017] to investigate the dynamic volatility spillovers among a number of Chinese financial markets as well as Bitcoin. Antonakakis and Gabauer [2017] extend the spillover index approach of Diebold and Yilmaz [2012] by allowing the variances to vary via a stochastic volatility Kalman filter estimation approach, exploring the transmission mechanism in a time-varying fashion. A TVP-VAR(1) model with time-varying volatility can be written as

y_t = β_t z_{t-1} + ε_t,  ε_t | Ω_{t-1} ~ N(0, S_t),
vec(β_t) = vec(β_{t-1}) + v_t,  v_t | Ω_{t-1} ~ N(0, R_t),

where y_t and z_{t-1} = [y'_{t-1}, ..., y'_{t-p}]' represent N × 1 and Np × 1 dimensional vectors, β_t is an N × Np time-varying coefficient matrix, ε_t is an N × 1 error disturbance vector with time-varying variance-covariance matrix S_t, vec(β_t) and v_t are N²p × 1 dimensional vectors, and R_t is an N²p × N²p dimensional matrix. The time-varying coefficients of the vector moving average (VMA) representation are the foundation of the connectedness index introduced by Diebold and Yilmaz [2012], using the generalized impulse response function (GIRF) and the generalized forecast error variance decomposition (GFEVD) developed by Koop et al. [1996] and Pesaran and Shin [1998]. The GFEVD can be interpreted as the variance share one variable has on others, and it can be calculated as

φ̃^g_{ij,t}(J) = (Σ_{h=1}^{J-1} ψ^{2,g}_{ij,t}(h)) / (Σ_{j=1}^{N} Σ_{h=1}^{J-1} ψ^{2,g}_{ij,t}(h)),

where φ̃^g_{ij,t}(J) denotes the J-step-ahead GFEVD and ψ^g_{ij,t}(h) the corresponding GIRF at horizon h. Using the GFEVD, variable i transmits its shock to all other variables j, and the total connectedness index of the network is

C^g_t(J) = (Σ_{i,j=1, i≠j}^{N} φ̃^g_{ij,t}(J)) / (Σ_{i,j=1}^{N} φ̃^g_{ij,t}(J)) × 100.

The spillovers of variable i to all other variables j, known as the total directional connectedness to others, are defined as

C^g_{i→j,t}(J) = (Σ_{j=1, j≠i}^{N} φ̃^g_{ji,t}(J)) / (Σ_{j=1}^{N} φ̃^g_{ji,t}(J)) × 100.

Similarly, the spillovers of all variables j to variable i (the directional connectedness variable i receives from variables j), known as the total directional connectedness from others, are defined as

C^g_{i←j,t}(J) = (Σ_{j=1, j≠i}^{N} φ̃^g_{ij,t}(J)) / (Σ_{j=1}^{N} φ̃^g_{ij,t}(J)) × 100.

The net total directional connectedness C^g_{i,t} is calculated as the total directional connectedness to others minus the total directional connectedness from others, C^g_{i,t}(J) = C^g_{i→j,t}(J) − C^g_{i←j,t}(J). The sign of the net total directional connectedness indicates whether a variable i is driving the network (C^g_{i,t} > 0) or driven by the network (C^g_{i,t} < 0). The net pairwise directional connectedness is calculated for the bidirectional relationships as

C^g_{ij,t}(J) = φ̃^g_{ji,t}(J) − φ̃^g_{ij,t}(J),

so that the net pairwise volatility spillover between markets i and j is simply the difference between the gross volatility shocks transmitted from variable i to variable j and those transmitted from variable j to variable i. Copula functions can be used for modelling correlated random variables. Let X_i be a random variable with marginal distribution F_i for i = 1, 2, ..., n. As Sklar [1973] shows, each distribution function F(x_1, ..., x_n) can be represented through its marginal distributions by using a copula, such that

F(x_1, ..., x_n) = C(F_1(x_1), ..., F_n(x_n)).

An n-dimensional copula C defined on [0, 1]^n can be written as

C(u_1, ..., u_n) = F(F_1^{-1}(u_1), ..., F_n^{-1}(u_n)) for all u_i ∈ [0, 1], i = 1, ..., N.

Following Patton [2006], copulas can be based on conditional distributions for estimating a DCC-GARCH t-copula model, where F^{-1}_{X_1}(u_1 | θ_1) represents the inverse conditional distribution and θ_1 represents the estimated parameters of the univariate GARCH model. The DCC model is applied to study the time-varying correlations of asset returns. The time-varying variance-covariance matrix H_t is defined as

H_t = D_t R_t D_t,

where D_t = diag(√h_{11,t}, ..., √h_{NN,t}) is a diagonal matrix of square-root conditional variances, and R_t contains the dynamic conditional correlations based on the standardised residuals' conditional variance-covariance matrix Q_t, which follows the GARCH(1,1)-type recursion of Engle [2002]:

R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2},  Q_t = (1 − a − b) Q̄ + a z_{t−1} z'_{t−1} + b Q_{t−1},

where z_t denotes the standardised residuals, Q̄ their unconditional covariance matrix, and a and b are positive scalar parameters satisfying a + b < 1 to ensure stationarity.
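As a concrete illustration of these connectedness measures, the following Python sketch computes the GFEVD and the total, directional, net and net pairwise indices for a constant-coefficient VAR(1). The TVP-VAR used in the paper replaces the fixed A and Sigma with Kalman-filtered, time-varying estimates at each t, so this is a simplified sketch rather than the authors' exact procedure.

import numpy as np

def gfevd(A, Sigma, J=10):
    """Generalized FEVD (Pesaran-Shin) for a VAR(1) y_t = A y_{t-1} + e_t."""
    N = A.shape[0]
    Psi = [np.linalg.matrix_power(A, h) for h in range(J)]   # VMA coefficients
    theta = np.zeros((N, N))
    for i in range(N):
        den = sum(Psi[h][i] @ Sigma @ Psi[h][i] for h in range(J))
        for j in range(N):
            num = sum((Psi[h][i] @ Sigma[:, j]) ** 2 for h in range(J)) / Sigma[j, j]
            theta[i, j] = num / den
    return theta / theta.sum(axis=1, keepdims=True)          # row-normalise

def connectedness(theta):
    """Total, directional, net and net pairwise spillovers from a GFEVD matrix."""
    N = theta.shape[0]
    off = theta - np.diag(np.diag(theta))   # drop own-variance shares
    total = 100 * off.sum() / N             # total connectedness index
    to_others = 100 * off.sum(axis=0)       # shocks each variable transmits
    from_others = 100 * off.sum(axis=1)     # shocks each variable receives
    net = to_others - from_others           # net total directional connectedness
    net_pairwise = 100 * (theta.T - theta)  # entry (i, j): net spillover i -> j
    return total, to_others, from_others, net, net_pairwise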
The DCC model is estimated under a multivariate Student-t distribution, which is applied because the normality assumption of the innovations is rejected for each volatility series. We first investigate how the coronavirus index affected Chinese stock, commodity, and foreign exchange markets, together with the largest cryptocurrency, Bitcoin, using the 5-minute return data. The structure of our selected methodologies and the data frequencies used to test our proposed hypotheses, along with the related robustness tests, is outlined above. A substantial slow-down in economic activity has led to turbulent global financial markets during COVID-19, primarily driven by the implementation of physical lock-downs across the world. We graphically examine the dynamic total connectedness plot presented in Figure 8, where the spillover index changes over time. The general trend of the dynamic total connectedness index is slightly increasing. We find that the volatility spillovers first reach a peak in early February 2020, indicating that the initial outbreak of COVID-19 had quite a strong impact on different Chinese financial markets as a result of the nationwide measures taken to contain the rapid spread of COVID-19. The spillovers are found to have decreased slightly during mid- and late February. 9 We also check the sensitivity of all the empirical results to the choice of vector autoregression order and forecast error horizons; our results remain unchanged. The most interesting finding of the total connectedness plot refers to the recent decline in oil and Bitcoin prices. The spillover index begins to increase sharply in early March and reaches a peak between 08 March and 13 March, as geopolitical issues between Saudi Arabia and Russia began to influence the price of oil, and Bitcoin prices crashed from over $9,000 to $4,000 between March 12 and 13. The geopolitical issues that sparked the volatility observed in oil prices began with the breakdown in dialogue between key oil producers over proposed oil-production cuts in response to decreased demand due to the outbreak of the COVID-19 pandemic. Calculated volatility spillovers decline temporarily after mid-March and remain quite stable until early April. US WTI crude oil futures for May plunged below zero on 20 April for the first time in recorded history. In late April, the dynamic total connectedness index increases dramatically again as the global financial markets face more turmoil. During the current pandemic, the dynamic total connectedness index changes as a combination of factors, including the coronavirus outbreak, political instability in oil markets, and the crash of Bitcoin prices, generated substantial fear across multiple financial markets. Insert Figure 9 about here. The dynamic net directional connectedness index is calculated as the difference between the gross volatility shocks transmitted to, and those received from, all other markets; this index is shown in Figure 9. There are three interesting findings.
First, we pay special attention to the dynamic net connectedness of the coronavirus index, as presented in Figure 9a. The dynamic net volatility spillover of the coronavirus index is positive during the entire sample period. The net directional connectedness of the coronavirus index is defined as the volatility shocks transmitted from the coronavirus index to the different financial assets minus the volatility shocks received by the coronavirus index from all other assets. Our results therefore show that volatility spillovers from the coronavirus index to all other financial assets play a dominant role. In other words, the coronavirus index is a volatility transmitter. Second, the net volatility spillover of the coronavirus index is found to decrease throughout the analysed period. The nationwide lock-downs imposed by the Chinese government were found to be both effective and successful in slowing the rapid spread of COVID-19. There are several major episodes of net volatility spillovers from the coronavirus index to other indices and markets, as shown in Figure 9. Third, we observe that the face mask index and the prices of gold, oil, and Bitcoin are all net volatility receivers, as they transmit less risk than they receive from other markets. Insert Figure 10 about here. The dynamic net pairwise directional connectedness between the coronavirus index and the other financial markets is shown in Figure 10, where the estimated pairwise spillover index is shown on the y-axis for every pair considered. A number of interesting empirical findings are noted as follows. The volatility shocks in the coronavirus index spill over mostly to the face mask index, the CSI300 index, gold, oil, and the Bitcoin price, as the corresponding net pairwise connectedness indices from the coronavirus index to each index or asset price are positive during the sample period. A positive pairwise connectedness indicates that the volatility shock transmitted from the coronavirus index to the other market was larger than that in the opposite direction; a positive pairwise connectedness index therefore implies that the coronavirus index influences the index or asset price. Strong volatility spillovers exist from the coronavirus index to the face mask index during the entire sample of our analysis, as the net pairwise directional connectedness index remains positive. While the influenza index is found to generate small effects, the coronavirus index is found to be a substantial volatility transmitter upon companies that manufacture face masks, owing to government enforcement and legislative procedures introduced for the safety of the population. The widespread usage of disposable masks has been shown to be effective in controlling the incidence rate of COVID-19 infections. For example, China, Japan, and South Korea have all been presented as strong examples of states that have promoted the widespread usage of face masks together with intensive strategies of thorough testing, effective contact tracing, and mandated social distancing, all of which have been found to contain risk. Some European countries also require their residents to wear face masks, for example Austria, the Czech Republic and Slovakia. The volatility spillover from the coronavirus index to the face mask index clearly demonstrates that the spread of COVID-19 has dramatically increased the demand for face masks worldwide.
There is clear evidence to support the view that the coronavirus index impacts the top 300 stocks traded on Chinese stock markets. In Figure 10, we observe strong volatility spillovers from the coronavirus index to the CSI 300 index, as the dynamic net pairwise directional connectedness index is generally positive. Here we provide a meaningful explanation of why volatility was transmitted from the coronavirus index to the CSI300 stock index. Since the outbreak of COVID-19, many CSI300-listed companies have made great efforts to fight the virus by rapidly shifting their production lines from cars and jet engines to face masks. Sinopec, also known as China Petroleum & Chemical Corp., obtained mask-making equipment and set up eleven production lines. Gree, the world's largest air conditioner maker, started to manufacture up to two million N95 and surgical masks daily. Furthermore, the Midea Group, a Chinese electrical appliance manufacturer also listed on the Fortune Global 500, created a new production line that can make 200,000 face masks per day, while BYD, China's biggest electric vehicle maker, built a plant in Shenzhen to produce five million masks and 300,000 bottles of disinfectant per day to tackle the growing COVID-19 outbreak. BYD is now becoming one of the world's largest face mask producers. A similar pattern is observed for the relationship between the coronavirus index and gold markets. During the current pandemic, gold has received more volatility shocks from the coronavirus index than it transmits. Our findings support a flight-to-safety phenomenon from coronavirus-related shocks to the gold market, as there is increasing demand for gold during the turmoil: investors shift from risky assets to safe-haven assets such as gold, which is traditionally regarded as a safe haven (see Baur and Lucey [2010]). When focusing on the relationship between the coronavirus index and oil, the dynamic net pairwise directional connectedness index is positive during the sample period. Our results indicate that the volatility shocks during the outbreak of the coronavirus pandemic directly and significantly influenced the oil market. Oil demand in China fell substantially due to the contraction in oil consumption, major disruptions to global travel, and reduced economic activity. When focusing on the dynamic net pairwise directional connectedness from the coronavirus index to soybean futures prices, volatility spillovers reach a maximum value in early February and then gradually decline, yet remain consistently positive until the end of March. Between late March and mid-April, the pairwise connectedness index is negative, indicating that the soybean market influenced the coronavirus index. However, the spillover index recovers quickly to a positive value. Overall, the outbreak of COVID-19 has had quite a strong impact on the soybean market. There is clear evidence that the coronavirus index affects the most important agricultural futures market in China, with the volatility shocks in the coronavirus index transmitting to soybeans. The volatility spillover results coincide with strong demand for food, together with the disruption to food supply chains amid the pandemic and the panic-buying and stockpiling behaviour of the population at the start of the pandemic. The effects on exchange rates are not found to be significant throughout the period analysed.
In a particularly interesting observation, we also identify strong evidence that the coronavirus index significantly influences the volatility of Bitcoin during the pandemic, as the dynamic net pairwise directional connectedness index is found to be consistently positive. The coronavirus index is a volatility transmitter to Bitcoin. Such findings may help to explain Bitcoin's highly volatile nature during the pandemic. On 12 March 2020, the price of Bitcoin fell almost 50% in one day, largely attributed to significant market issues on the BitMEX exchange, which has long been associated with leveraged traders. Even considering this abnormal price decline, the collapse of Bitcoin prices from over $9,000 to almost $4,000 was followed in the days thereafter by a sharp recovery, with the price of Bitcoin reaching $10,000 in May 2020. The identification of pairwise directional connectedness between the coronavirus index and Bitcoin is a novel finding to date. This result specifically supports the presence of a flight-to-safety phenomenon in Bitcoin markets during the market turbulence, with Bitcoin acting as a safe haven for Chinese investors. Given a number of substantial geopolitical issues and elevated prices in agricultural, energy and precious metal commodities, which present historical evidence of flight-to-safety behaviour, the sharp collapse in Bitcoin prices presented quite a timely opportunity for some investors to enter this cryptocurrency market at prices substantially below multi-year averages. Insert Table 3 about here We also present the maxima, minima, means and standard deviations of the corresponding pairwise spillover estimates for each pair in Table 3. From this table, we find that the volatility spillover effects from the coronavirus index to the face mask index are greater than those to the other asset prices, followed by the gold, oil and Bitcoin markets. It is interesting to note that coronavirus-related shocks transmit volatility shocks to three important assets: gold, oil and Bitcoin. Overall, we find overwhelming evidence that the coronavirus significantly and substantially affected Chinese financial markets and Bitcoin during the pandemic. In contrast with Section 5.1, we are particularly interested in exploring how the traditional influenza index affects stock, commodity, foreign exchange and cryptocurrency markets using the 5-minute data for the same sample period. The influenza index is selected as an alternative proxy measure for the COVID-19 virus in this paper, focusing on companies that would have been particularly central to the traditional influenza seasons expected prior to the outbreak of COVID-19. Therefore, an eight-variable VAR model is estimated including the influenza index, face mask index, CSI300 index, gold futures price, oil futures price, soybean futures price, US Dollar/RMB exchange rate and Bitcoin spot price. Insert Figure 11 about here As we can see from Figure 11, the dynamic total connectedness index is time-varying. The dynamic total connectedness index has an increasing trend, and it reaches a peak when the recent oil price war started on 08 March 2020. The total volatility spillover shows an increase in range volatility interdependence between the markets after April. The combination of COVID-19 with the recent oil price war certainly led to increased financial volatility, at least for Chinese markets.
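The eight-variable VAR and the variance decomposition behind these spillover tables can be sketched as follows. This is a minimal illustration, assuming a DataFrame of 5-minute volatility proxies for the eight series and the generalized (ordering-invariant) forecast-error variance decomposition common in this literature; the lag order, forecast horizon and rolling-window length are assumptions, as the text does not state the exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def generalized_fevd(df: pd.DataFrame, lags: int = 1, horizon: int = 10) -> np.ndarray:
    """Row-normalised generalized FEVD (in %) for a VAR fitted to df's columns."""
    res = VAR(df).fit(lags)
    sigma = np.asarray(res.sigma_u)            # residual covariance matrix
    A = res.ma_rep(horizon - 1)                # MA coefficients A_0 ... A_{H-1}
    k = sigma.shape[0]
    num = np.zeros((k, k))
    den = np.zeros(k)
    for h in range(horizon):
        AhS = A[h] @ sigma
        num += (AhS ** 2) / np.diag(sigma)[None, :]   # (e_i' A_h Sigma e_j)^2 / sigma_jj
        den += np.diag(A[h] @ sigma @ A[h].T)         # e_i' A_h Sigma A_h' e_i
    theta = num / den[:, None]
    return 100 * theta / theta.sum(axis=1, keepdims=True)

# Dynamic (rolling-window) indices re-estimate the table window by window, e.g.:
# tables = [generalized_fevd(vol.iloc[t:t + 400]) for t in range(len(vol) - 400)]
```

Each rolling-window table then feeds the to/from/net calculations sketched earlier, producing the time-varying series plotted in Figures 9 through 11.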
Insert Figure 12 about here The dynamic net directional connectedness index, which shows the difference between the gross volatility shocks transmitted to and those received from all other markets, is shown in Figure 12. As we can see, the influenza and CSI300 indices are volatility transmitters. The dynamic net directional connectedness index for influenza indicates, from a time-varying perspective, how much each market contributes to the influenza index in net terms. The dynamic net spillover index for influenza is presented in Figure 12a. It is clear that volatility spillovers from the influenza index to the other considered assets outweigh the spillover effects it receives in the transmission process. This provides evidence that the influenza index is a volatility transmitter during the current pandemic. The volatility spillovers first peak early in the sample period. The pairwise connectedness index for the influenza-CSI300 pair is found to be mostly negative, indicating strong volatility spillovers towards the influenza index from the CSI300 index. The influenza index is influenced by the CSI300 index and is, therefore, a net volatility receiver from the CSI300 index. These results contrast with those observed in the corresponding analysis of the coronavirus index, which is found to have had stronger impacts on the top 300 traded stocks in Chinese stock markets. Similar to the coronavirus index, the influenza index influences both gold and oil risks, but not on the scale observed during the recent pandemic. Furthermore, the relationship between the influenza index and soybeans is found to be similar to that experienced during the coronavirus pandemic, specifically at points in time where there was evidence of panic-buying in Chinese markets. Within these results, one of the key findings remains that of positive directional spillovers to Bitcoin, presenting more evidence of safe-haven behaviour. Insert Table 4 about here Table 4 also shows the maxima, minima, means and standard deviations of the individual dynamic pairwise spillover estimates. Comparing the mean estimates of the dynamic pairwise spillovers, we find that most volatility shocks in the influenza index spill over to face masks, gold, oil and Bitcoin. From this, we find that the influenza index significantly impacts upon the risks of several Chinese financial markets by transmitting directional volatility shocks, for example, to gold, oil, soybean and, most unexpectedly, Bitcoin. Such a result provides methodological robustness for our earlier coronavirus index analysis. It is important next to test what differential effects can be uncovered through analysis of the differences between the coronavirus and influenza indices; that is, what specific differences in market effects, as measured through investor sentiment, has the outbreak of the COVID-19 pandemic generated when compared with traditional influenza outbreaks in China? Section 5.1 and Section 5.2 investigate the volatility spillovers between the coronavirus/influenza indices and key financial markets, and the corresponding results from these two sections show that risks within both the coronavirus and influenza indices influence the financial risks of several markets.
In this section, we therefore compare the net pairwise directional connectedness of the coronavirus and influenza indices on the same financial market by taking the pairwise spillover of coronavirus (as presented in Figure 10) less the pairwise spillover of influenza (as presented in Figure 13). The resulting difference between the two pairwise spillover indices for a particular market is provided in Figure 14. A positive (negative) value in the difference of the two spillover indices indicates that the coronavirus (influenza) has the more substantial impact on the volatility transmissions in net form. We can therefore explore whether the coronavirus index or the influenza index has played the more important role in the volatility transmission process towards our selected financial markets. An inspection of Figure 14 shows that the coronavirus index influences the risk to financial markets more than the influenza index does, as the differences in the pairwise spillover indices are generally positive. The volatility transmission to the group of markets considered has been larger from the coronavirus index than from the influenza index. Insert Figure 14 about here Considering the particular markets, we first look at the differences in the pairwise volatility spillovers to the face mask index as depicted in Figure 14a, where the differences between the two pairwise spillover indices are positive during the entire sample period. This suggests that the coronavirus index contributed more volatility shocks to the face mask industry than the influenza index did during the pandemic. It should be noted that both the coronavirus and influenza indices are volatility transmitters to the face mask index. The difference between the two net pairwise spillover indices to the CSI300 stock market implies that the coronavirus index has a stronger influence on the top 300 traded stocks in China, contributing more volatility shocks to the CSI300 index than the traditional, expected influenza season. As identified earlier, the influenza index is a net volatility receiver from the CSI300 index, while the coronavirus index is a volatility transmitter. The markets for both gold and oil are of particular interest during the pandemic. The coronavirus pandemic dominates the volatility transmission process to both the gold and oil markets, where the coronavirus index has a larger effect than the influenza index. This result further confirms that the coronavirus had a significant impact on the Chinese oil futures market, with demand falling sharply during the lock-down that quickly followed. Positive differentials in the calculated relationships between both indices and Bitcoin provide further evidence of volatility transfer from the pandemic to the developing digital currency, supporting the view that there existed a flight-to-safety during the development of the epidemic as investors attempted to quantify the true nature of the risks with which they were confronted. Using an eight-variable VAR model, we explored the volatility spillovers between coronavirus and other markets in Section 5.1, and considered a similar model for the influenza index in Section 5.2. The corresponding results suggest that both the coronavirus and influenza indices transmit volatility shocks to several important markets including gold, oil and Bitcoin. In this section, a robustness check is undertaken by considering a nine-variable VAR model to investigate the potential contagion of shocks among financial markets.
For this nine-variable model, we include the coronavirus, influenza, and face mask indices together with the other financial markets. Insert Figure 15 about here We first present the net pairwise directional connectedness between the coronavirus index and other markets in Figure 15. A first notable finding is that the pairwise spillover indices between the coronavirus index and each individual financial market are positive during the whole sample period, except for the soybean and foreign exchange rate markets. Insert Figure 16 about here The net pairwise directional connectedness between the influenza index and other markets is shown in Figure 16. We observe that the net pairwise directional connectedness indices are positive during the entire sample period for the face mask, CSI300, gold, oil and Bitcoin markets. In these cases, the volatility spillovers from the influenza index to each of these markets are higher than the volatility spillovers from each financial market to the influenza index. In particular, Figure 16b illustrates a different story for the CSI300 index under the nine-variable model: as previously shown in Figure 13b using the eight-variable model, the pairwise connectedness index for the influenza-CSI300 pair is mostly negative, whereas for soybean the pairwise connectedness index remains mostly positive. Overall, most results from Figure 15 and Figure 16 coincide with those in Section 5.1 and Section 5.2. Both the coronavirus and influenza indices affect several financial markets in China together with the largest cryptocurrency market, and there exist strong interlinkages among the COVID-19-related indices and the other six financial markets. In addition, we can also compare the volatility shocks of coronavirus and influenza to other financial markets by taking the appropriate pairwise spillover of coronavirus in Figure 15 less the pairwise spillover of influenza in Figure 16. We present the difference between the two pairwise spillover indices for a particular market in Figure 17 (a short computational sketch of this differencing follows at the end of this section). Insert Figure 17 about here For example, we use the estimate from Figure 15b minus that from Figure 16a to see the differences in the volatility shocks of coronavirus and influenza to the face mask market. A positive (negative) value in the difference of the two spillover indices implies that the coronavirus (influenza) has more impact on the volatility transmission. As can be seen from Figure 17, the coronavirus index contributes more volatility shocks to a wide range of financial markets than the influenza index, which is consistent with the conclusion drawn in Section 5.3 using the eight-variable model. This result further strengthens our aforementioned finding that the coronavirus index dominates the volatility transmission process. Based on a broad range of carefully selected data, we identify sentiment-induced, significant directional volatility spillovers from companies whose primary business interests lie in the development of face masks and from those at the front line of the battle against the COVID-19 pandemic, along with a number of behavioural observations relating to investors. The COVID-19 pandemic has the properties of a black-swan event, where the rarity of the international shock can be observed in the lack of preparation evident in the statements of international corporate entities, who were caught unawares by the scale and speed of the outbreak.
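As referenced above, the differencing behind Figures 14 and 17 reduces to subtracting two aligned pairwise spillover series. A minimal sketch, with hypothetical series names standing in for the estimated pairwise spillovers towards the same market:

```python
import pandas as pd

def spillover_differential(cov_to_market: pd.Series, flu_to_market: pd.Series) -> pd.Series:
    """Positive values: the coronavirus index dominates the transmission; negative: influenza."""
    return cov_to_market.sub(flu_to_market)

# Example reading: the share of the sample in which the coronavirus index dominates.
# diff = spillover_differential(cov_to_gold, flu_to_gold)
# print((diff > 0).mean())
```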
Focusing on the Chinese epicentre of the pandemic presents a number of very interesting observations. Prior to the identification of the 'mystery pneumonia' in Wuhan, evidence suggests that there were no related media announcements before 17 November 2019. Some news sources have stated the existence of social media posts and anecdotal evidence prior to this date. However, it is clear that the severity of the forthcoming pandemic was broadly missed by the international investment community. Even considering the presence of COVID-19 in its relative infancy in late 2019, evidence suggests that international financial markets did not consider the pandemic to be threatening. It was not until January and February 2020 that the associated economic concerns generated panic throughout financial markets, similar in nature to the March 2008 collapse of Bear Stearns, when investors were faced with an opaque and unique inter-generational crisis. Evidence provided in this research suggests that Chinese investors responded in much the same manner as investors throughout a number of historical crises, by seeking short-term security in commodity markets. While evidence regarding shifting investment to soybean markets does not appear to be consistent throughout the period of analysis, directional spillovers of volatility are evident and consistently flow to both the markets for Chinese gold and oil futures. The former connection can be explained by the long-standing traditional safe haven in gold markets; the latter can be deemed consistent with the realisation that an economic slowdown in China would lead to a substantial collapse in demand for oil, which, along with a number of geopolitical episodes, is further reflected in the subsequent collapse of Brent crude and, in a more extreme manner, West Texas Intermediate. What is most interesting within these new results is that there is a significant relationship between the outbreak of the COVID-19 pandemic and directional volatility spillovers into the Bitcoin market. While digital currencies continue to develop at pace, Bitcoin remains the market-leading cryptocurrency in terms of both market capitalisation and trading liquidity. The Chinese government have, for a number of years, remained quite optimistic about the role that blockchain could play with regards to economic development and control over the broad technology; however, their stance with regards to cryptocurrencies has been distinctly cautious. In recent times, the Chinese state has censored posts against blockchain, while the Chinese central bank has stated its preference to instead restrict cryptocurrency trading. It would appear that dilution of control over monetary policy could be one reason why the Chinese government has taken this stance, which is quite reasonable given the wide variety of issues that remain within the structure of Bitcoin and other related cryptocurrencies. Such digital assets also possess the threat of disrupting traditional Chinese monetary flows and economic platforms, not to mention transactions that flow outside sovereign and state control and the potential for investment losses and over-inflating bubbles. These headwinds stand against the natural advantages that the Chinese population have possessed in the creation of cryptocurrencies: primarily, the country's strength in technological innovation and, secondly, close proximity to nations with cheap sources of power.
Evidence of Chinese-fuelled rallies in Bitcoin was seen as recently as October 2019, when the price of Bitcoin exceeded $10,000 after President Xi Jinping stated that Beijing would increase investment in blockchain technology. An official within China's central bank also stated that blockchain technology could potentially help with the risk control of commercial banks and potentially increase lending opportunities for smaller domestic businesses. On the same day as this statement, technology shares relating to blockchain and cryptocurrency rose to their daily price limits in both Shanghai and Shenzhen. As the economic crisis associated with the COVID-19 pandemic continued to advance, there was an observed softening of the firm stance that the Chinese government had taken on cryptocurrency. In late 2019 and throughout early 2020, multiple media sources reported that the People's Bank of China had launched a digital currency called the Digital Currency Electronic Payment (DCEP), primarily the work of the four largest state banks and three telecoms companies, within a stage-by-stage rollout. Central bank officials explained that this central bank digital currency (CBDC) will use a two-tier system in which both the central bank and the stated financial institutions will be legitimate issuers. The timing of the announcement of DCEP is of broad interest. It is widely accepted that cryptocurrencies such as Bitcoin or Ethereum stand in contravention to central banks and legacy financial institutions. Some governments (including China) subsequently blocked cryptocurrencies and proceeded to ban initial coin offerings (ICOs) and exchanges from trading from servers in China or using the Chinese Yuan. Prior to the introduction of these regulations, over 70% of the world's Bitcoin was mined in China. The sequence and timing of such events appear to have generated an environment of support in which Chinese investor confidence in cryptocurrency broadly, and in Bitcoin in particular, increased. In the face of the incredible level of adversity facing Chinese investors, it would appear that both the sequence and timing of these events created an environment in which Bitcoin offered a safe haven to investors as both equity and traditional commodity markets attempted to identify the scale of the COVID-19 crisis. While international financial markets attempted to rapidly quantify and forecast perceptions of risk and financial loss associated with the official WHO announcements on the international outbreak of COVID-19, it became quickly evident that investors were faced with quite unique economic, geopolitical and social challenges. Chinese financial markets were the first to respond, owing to the initiation of the COVID-19 pandemic in Wuhan and the subsequent lock-downs of entire cities that quickly ensued. It was at this point that the wider international community understood the true nature of the issue with which it was confronted. Given the elevated probability that pandemics will recur in the future, it is important to understand the behaviour of investors in the aftermath of such events. One particularly novel way to investigate such behaviour is through the use of a number of related indices.
We specifically identify such effects through the use of indices relating directly to traditional influenza, coronavirus and face masks, built upon corporate entities whose central business practice is the R&D, production, sales and distribution of goods and services related to each sector. Furthermore, each index represents the effects of COVID-19 on Chinese financial markets as measured through real-time investor sentiment. Further analysis identifies that, in the very early stages of the COVID-19 outbreak, it became abundantly clear that the coronavirus index possessed a far more substantial, pronounced and persistent impact on financial markets than the traditional and long-standing influenza index. This signalled the deep-rooted economic, social and geopolitical issues that COVID-19 posed for the global population. First, we need to understand how investors responded in the aftermath of news announcements. To do so, we focus specifically on volatility spillovers from our selected influenza and coronavirus indices, the direction and scale of which offer information on the manner in which investors responded. Both the coronavirus and influenza indices are found to have directly influenced the face mask index as measured by directional volatility spillovers. As government measures escalated, the production and sale of face masks were among the first to be mandated, therefore directly influencing sales and cash flows. Further, we identify evidence of strong volatility spillovers from the coronavirus and influenza indices towards the Chinese gold and oil futures markets during the pandemic, presenting evidence of the existence of traditional flight-to-safety channels during this time. It is important to stress the distinction between Chinese-traded oil and the negative pricing experienced in the market for West Texas Intermediate in mid-April 2020. The interactions between the coronavirus and influenza indices and Chinese oil, as measured by INE futures contracts, are a particularly interesting observation. Market rules limit daily price movements to 10%, and INE trading was halted on the most volatile days of this period. Such contracts were the first oil futures to be traded in China, the world's biggest oil importer, and spillovers generated from the outbreak of the COVID-19 pandemic indicate behaviour similar to that of investors seeking a safe haven during the initial stages of the subprime collapse in 2007 and 2008. Similar spillovers into gold contracts add further support to this result. Our results not only show that COVID-19 has had a distinct and lasting impact on Chinese agricultural, energy and financial markets; we also identify that COVID-19 appears to have had a substantial and pronounced effect on the price of Bitcoin as measured through directional volatility spillovers. This result is found to be robust across alternative modelling structures. Chinese government education provision with regards to the development and trading of digital currencies appears to have provided substantive support and confidence in these rapidly developing digital technologies. While such use of digital currency as a short-term store of value will be supported by proponents and opportunistic governments, it will cause substantial alarm amongst regulators and policy-makers who are aware of the broad range and frequency of both complex and relatively simple fraud.
Should the COVID-19 pandemic continue to evolve, or indeed, should another pandemic occur in the future, we must consider the role that cryptocurrencies could potentially play during periods of exceptional market stress. Note: Our sample data runs from 05 February 2020 through 12 May 2020, representing 1,982 observations. Further analysis and methodological variants were estimated by the authors.
Amidst the global outbreak of coronavirus disease 2019 (COVID-19), Singapore reported its first case on 23 January 2020 1. Subsequently, the local 'Disease Outbreak Response System Condition' level was raised to Orange on 7 February 2020 when community transmission began. At this juncture, the government started to emphasise the role that individuals had to play by adopting health-protective behaviours 2. In an infectious disease outbreak such as the COVID-19 pandemic, individual-level health-protective behaviours can be classified into: (i) preventive behaviours, measures that can prevent transmission (e.g., hand-washing), and (ii) avoidant behaviours, measures that decrease contact with other individuals (e.g., avoiding crowded areas) 3. As COVID-19 is believed to be transmitted primarily through contact or droplet transmission 4, these measures can be effective in reducing the spread of the virus, particularly when pharmacological interventions are limited 5, 6. For risk communication, it is useful to understand what characteristics predict whether an individual adopts health-protective behaviours. This allows public health messaging to be targeted, improving compliance in groups that may not comply as readily. For example, in the previous outbreak of severe acute respiratory syndrome (SARS), preventive and avoidant behaviours were more likely to be adopted by women, older individuals, and those with higher education levels 3. In the current COVID-19 outbreak, health-protective behaviours have been observed amongst individuals who perceive a higher risk of infection, higher disease severity, or who are afraid of getting infected 7-9. However, demographic predictors have differed between the populations studied: whereas age and gender were linked to behavioural changes in South Korea, these associations were not found in the United Kingdom 8, 9. Further, no demographic predictors were identified in a study in the United States, while gender, but not age, predicted behavioural changes in a cross-country survey 10, 11. This heterogeneity suggests that the uptake of health-protective behaviours may be context-specific during the COVID-19 pandemic, owing, perhaps, to heterogeneity in the risk of infection or risk of severe illness between countries. In light of this context-specificity, we conducted a large-scale survey to examine how demographics predict the uptake of health-protective behaviours in Singapore, to characterise our local population. Our study was conducted across March-April 2020, a period when the country saw a rapid increase in COVID-19 cases (from 138 cases at the start of the study to 9125 cases at the end of the survey period). From 7 March to 21 April 2020, 1145 participants responded to an online survey on COVID-19. As the inclusion criteria, participants: (1) were aged ≥21 years old, and (2) had lived in Singapore for ≥2 years.
Given public health concerns, participants were recruited online, via advertisements placed in community chatgroups (e.g., Facebook and WhatsApp groups for residential estates, universities, and workplaces) or via paid Facebook advertisements targeting Singapore-based users. The study was approved by the Yale-NUS College Ethics Review Committee (#2020-CERC-001), and participants gave written consent in accordance with the Declaration of Helsinki. The questions reported in this study were part of a larger 20-minute survey exploring: behavioural and psychological responses to COVID-19, sources from which participants received COVID-19 news, and psychological wellbeing (https://osf.io/pv3bj) 12. As predictors, participants reported the following demographic details: gender, ethnicity, religion, country of birth, marital status, education, house type, and household size. As behavioural changes may be influenced by the local COVID-19 situation, we also recorded the total number of local cases reported to date, and whether the country was locked down when the survey was completed (computed based on the survey time-stamp). As the key outcome variables, participants indicated which of 17 health-protective behaviours they had voluntarily undertaken because of the pandemic (by indicating 'yes' or 'no' for each behaviour). Based on prior research 3, we investigated 3 preventive behaviours, asking participants whether they had: (1) washed their hands more frequently, (2) used hand sanitisers, and/or (3) worn a mask in public (prior to legislation). Additionally, we investigated 14 possible avoidant behaviours, asking whether participants had: (1) avoided crowded areas, (2) reduced physical contact, (3) stayed home more than usual, (4) distanced from people with flu symptoms, (5) voluntarily changed travel plans, (6) missed or postponed social events, (7) avoided visiting hospitals and/or healthcare settings, (8) chosen outdoor over indoor venues, (9) distanced from people with recent travel to outbreak countries, (10) distanced from people with possible contact with COVID-19 cases, (11) avoided places where COVID-19 cases were reported, (12) stored up more household and/or food supplies, (13) relied more on online shopping (prior to shop closures), and/or (14) avoided public transport. Across the 17 items, we assigned a score of '1' for 'yes' responses, and these were summed to create three scores: the total number of behavioural changes adopted (out of 17), the total number of preventive behaviours adopted (subscale score out of 3), and the total number of avoidant behaviours adopted (subscale score out of 14). Finally, we included as a separate item the following statement: "I did not take any additional measures" (yes/no response). This question allowed us to identify participants who had not made any behavioural changes as a function of COVID-19, a group that may be at higher risk for transmission.
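The scoring scheme just described is straightforward to reproduce. Below is a minimal sketch, assuming a response frame with one 0/1 column per behaviour; the column names are illustrative stand-ins for the survey's item labels, not the actual questionnaire wording.

```python
import pandas as pd

PREVENTIVE = ["wash_hands", "hand_sanitiser", "mask_in_public"]               # 3 items
AVOIDANT = ["avoid_crowds", "reduce_contact", "stay_home", "distance_flu",
            "change_travel", "miss_social", "avoid_hospitals", "outdoor_venues",
            "distance_travellers", "distance_contacts", "avoid_case_sites",
            "stockpile", "online_shopping", "avoid_transport"]                # 14 items

def score_behaviours(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum 'yes' (1) responses into the three scores described in the text."""
    scores = pd.DataFrame(index=responses.index)
    scores["preventive"] = responses[PREVENTIVE].sum(axis=1)     # subscale out of 3
    scores["avoidant"] = responses[AVOIDANT].sum(axis=1)         # subscale out of 14
    scores["total"] = scores["preventive"] + scores["avoidant"]  # total out of 17
    return scores
```

Note that the separate "no additional measures" item was recorded directly in the survey rather than derived from these scores.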
To describe participants' demographic characteristics, survey responses were summarized with counts. As the primary analysis, we then ran a linear regression model with the total number of behavioural changes as the outcome measure, and participant demographics as predictors (age, gender, ethnicity, religion, country of birth, marital status, education, house type, household size, the total number of local cases reported to date, and whether the country was locked down at the time of the survey). Of 1390 individuals who clicked the survey link, 1145 (82.4%) provided informed consent and participated in the survey. A further 192 (16.77%) participants were excluded from statistical analyses as they did not complete the primary outcome measures (on behavioural changes). As shown in Table 1, the final sample of 953 participants was comparable to the resident Singapore population in the proportion of Singapore citizens, marital status, and household size (≤10% difference). However, the pool of respondents had a greater representation of females (65.1% vs. 51.1%) and university graduates (72.7%). On the whole, participants adopted a mean of 8.01 (SD=3.78) behavioural changes owing to the COVID-19 pandemic. This corresponded to a mean of 2.14 (SD=0.81) preventive measures and 5.87 (SD=3.44) avoidant measures. Only 29 participants (3.04%) reported that they had not changed their behaviours at all. In our first regression model, we sought to predict the total number of behavioural changes based on participant demographics (Table 2). We first observed that behavioural changes tracked the local COVID-19 situation: namely, as the number of local cases increased, individuals adapted their behaviours in response (b=3.03, t(913)=3.96, p<0.001). Having controlled for local transmission, gender emerged as a significant predictor, with females adopting an average of 0.14 more changes than males (t(913)=-4.49, p<0.001). Being married was also associated with a higher number of health-protective behaviours than being single (b=1.09, t(913)=3.52, p<0.001). In our second and third models, we examined whether demographic predictors differed for preventive vs. avoidant behaviours. While the adoption of preventive behaviours was predicted by gender (b=-0.241, t(913)=-4.33, p<0.001) and age (b=-0.008, t(913)=-3.11, p=0.001), the adoption of avoidant behaviours was predicted by gender (b=-0.902, t(913)=-3.90, p<0.001) and marital status (being married vs. being single; b=0.973, t(913)=3.45, p<0.001).
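The primary regression can be expressed with statsmodels' formula interface. A minimal sketch, assuming a frame combining the behaviour scores above with the demographic and context columns; the file name and column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("survey_scored.csv")   # hypothetical file: scores plus demographics

# Model 1: total behavioural changes on demographics and the local outbreak context.
m1 = smf.ols(
    "total ~ age + C(gender) + C(ethnicity) + C(religion) + C(country_of_birth)"
    " + C(marital_status) + C(education) + C(house_type) + household_size"
    " + local_cases + C(lockdown)",
    data=data,
).fit()
print(m1.summary())

# Models 2 and 3 reuse the same right-hand side with 'preventive' and 'avoidant'
# as outcomes; Model 4 uses the 'no additional measures' indicator.
```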
Finally, in our fourth model, we found that no demographic predictor significantly identified the small proportion of individuals who had not undertaken any measures on account of COVID-19 (all p > Bonferroni-adjusted alpha of 0.002). To better understand the pattern of results, we conducted post-hoc chi-square tests to identify which behavioural changes differed as a function of gender and marital status (for all behaviours), and as a function of age (for preventive behaviours). As these were exploratory analyses, the type 1 decision-wise error rate was controlled at 0.05 (uncorrected). Gender. As shown in Figure 1, females were more likely than males to: 1) wash their hands more frequently, χ2(1, N=953)=22.17, p<0.001; 2) avoid crowded areas, χ2(1, N=953)=11.83, p=0.001; 3) reduce physical contact, χ2(1, N=953)=9.28, p=0.002; and 4) stay home more than usual, χ2(1, N=953)=9.79, p=0.002. Marital status. As shown in Figure 2, marital status was significantly associated with: 1) avoiding crowded areas, χ2(2, N=952)=26.29, p<0.001; 2) staying home more than usual, χ2(2, N=952)=28.09, p<0.001; 3) choosing outdoor over indoor areas, χ2(2, N=952)=33.04, p<0.001; and 4) relying more on online shopping, χ2(2, N=952)=26.37, p<0.001. In each case, single participants were less likely to adopt these behaviours than those who were not single (married, widowed, separated, or divorced). Age. Finally, wearing a mask in public differed between age groups, χ2(4, N=953)=33.32, p<0.001, with participants aged 21-30 most likely to adopt this behaviour (Figure 3). As the chi-square analyses examined behavioural changes as a function of one predictor at a time (either gender, marital status, or age), we repeated our analyses by regressing each behavioural change against the full set of demographics (per Models 1-4 previously). Our conclusions did not change, and the full table of chi-square analyses and regression results are reported in the Appendix. [Table 3 and Figures 1-3 about here] In this study, we documented for the first time how residents in Singapore have adapted their behaviours to minimize COVID-19 transmission. The large majority of participants (97%) had undertaken at least one infection control measure, with participants reporting an average of 8 lifestyle changes owing to the pandemic. As might be expected, behavioural changes increased with the number of COVID-19 cases reported locally. In terms of demographic predictors, health-protective measures were most likely to be adopted by females and those who were married. When we distinguished between preventive (e.g., hand washing) and avoidant (e.g., avoiding crowded areas) behaviours, age emerged as an additional predictor for preventive behaviours, with youths most likely to adopt mask-wearing. Collectively, our results on gender and marital status replicate findings from previous infectious disease outbreaks 3, 15 and the current COVID-19 pandemic (based on both an international and a South Korean sample 8, 10).
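Returning briefly to the post-hoc tests reported above, these are standard chi-square tests of independence on contingency tables. A minimal sketch, reusing the same hypothetical frame as before:

```python
import pandas as pd
from scipy.stats import chi2_contingency

data = pd.read_csv("survey_scored.csv")   # hypothetical file, as above

# Example: does mask-wearing differ across the five age groups?
table = pd.crosstab(data["age_group"], data["mask_in_public"])
chi2, p, dof, _ = chi2_contingency(table)

alpha = 0.002   # the Bonferroni-adjusted threshold reported for the figures
print(f"chi2({dof}, N={table.values.sum()}) = {chi2:.2f}, p = {p:.4f}, "
      f"significant at adjusted alpha: {p < alpha}")
```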
These findings echo a broader pattern of risk that has emerged in epidemiological research, whereby being female and being married have been linked to reduced risk of disease and of all-cause mortality 16. Adding to this body of research, our findings highlight how a willingness to adopt health-promoting behaviours during a pandemic may contribute to the resilience of these demographic groups. Departing from prior research and popular belief, however, we found that age was inversely related to the take-up of preventive behaviours. In particular, younger adults in our survey were more likely to wear masks than older adults, even before legislation stipulating that masks had to be worn in public. This finding is remarkable for several reasons. First, during SARS, older adults had been more likely to perform a range of preventive behaviours including mask-wearing, hand-washing, respiratory hygiene, using utensils, and washing after touching contaminated surfaces 3. Second, during the current outbreak, several high-profile events (e.g., coronavirus parties hosted by students) have fuelled the belief that youths are least likely to care about the outbreak, and thereby most likely to ignore infection control measures 17, 18. Indeed, the Director-General of the World Health Organization released a statement telling youths that they were "not invincible", and that "the virus could put (them) in hospital for weeks, or even kill (them)" 19-21. Rather than finding that young persons took on risky behaviours, however, we observed that this demographic group was the one most associated with mask-wearing. While this finding is counter-intuitive, it is in line with recent Hong Kong research whereby elderly participants, rather than the young, were least likely to worry about getting infected, and thus least likely to adopt protective behaviours 22. Additionally, young persons' ready adoption of mask-wearing may reflect a general willingness to embrace change and innovation, since mask-wearing had not previously been a norm in Singapore (as it had been in countries like Japan 23). Moving forward, our findings may contribute to the public health strategy in several ways. First, throughout the pandemic, government agencies have repeatedly noted how individuals have ignored official advisories 24, 25. This phenomenon has been so widespread that such individuals have been nick-named 'covidiots' in the popular press, a portmanteau of coronavirus and idiot 26, 27. Beyond 'naming and shaming', however, our research highlights characteristics that may predict non-compliance. This, in turn, will allow risk communication to be targeted. On the other hand, our findings also highlight which demographic groups may be most likely to respond when the government launches a new infection control measure (for example, SafeEntry or TraceTogether for contact tracing). Extrapolating
from our research, these initiatives, if perceived to be health-protective, may be adopted first by females and those who are married. Correspondingly, these two demographic groups may be ideal for pilot trials or as advocates for the behaviours. In making these recommendations, we note that our study has several limitations. First, we relied on participants' self-reports, which may be vulnerable to recollection biases. Future research will need to explore whether our findings translate to actual behavioural changes during the pandemic. Second, although our survey methodology captured behavioural changes at one particular time-point, the recommendation of infection control measures is a moving target. In the case of mask-wearing, for example, official advisories changed from masks not being needed, to being encouraged, to finally being mandated (as of 14 April 2020) 28. Correspondingly, further research is needed to examine whether our findings continue to hold even as official advisories change. In conclusion, we conducted the first Singapore-based study of behavioural changes during the COVID-19 pandemic. Although the scale of this crisis has been unprecedented and many uncertainties remain, many of our findings reinforce long-standing patterns of how demographic characteristics can pre-dispose an individual to disease, in this case via the uptake of measures that can minimize COVID-19 infection. Moving forward, our findings provide a template by which official messaging can be tailored for health promotion. Figure 1.
Uptake of COVID-19 infection control measures as a function of gender. Asterisks indicate significance at p < 0.002 (following Bonferroni corrections), and horizontal lines represent the 95% confidence intervals. Figure 2. Uptake of COVID-19 infection control measures as a function of marital status. Asterisks indicate significance at p < 0.002 (following Bonferroni corrections), and horizontal lines represent the 95% confidence intervals. Figure 3. Uptake of COVID-19 infection control measures as a function of age group. Asterisks indicate significance at p < 0.002 (following Bonferroni corrections), and horizontal lines represent the 95% confidence intervals. Appendix 1: Proportion of respondents who adopted each behaviour change, and chi-square results for each behaviour change by gender, marital status and age group. Age groups were separated as: 21-30 years old, 31-40 years old, 41-50 years old, 51-60 years old and above 60 years old. Base categories for categorical variables were: female for gender, Chinese for ethnicity, no religion for religion, single for marital status, Singapore for country of birth and yes for lockdown.
Health education is very important for the general public to internalize appropriate health information that will serve as a guide to their health behaviour. This is because people are likely to exhibit health behaviour based on the information available to them. The point to make here is that the health behaviour of people is largely dependent on the information available to them. He et al. [1], after their study using a Chinese population, concluded that health information and health behaviour are key to public health education. Within the context of public health, information possession is vital because poor knowledge about public health issues could have corresponding negative implications for health behaviour. Conner [2] corroborates that information is one of the fundamental cognitive determinants that influence health behaviour. Limaye et al. [3] also acknowledged the important role of information in health education. The media, as the fourth estate of the realm, occupy a cardinal role in health education. Many decades ago, Flora et al. [4] outlined four roles that the media can play in health intervention: the first is what they called media as educator, the second is media as supporter, the third is media as programme promoter, and the fourth is media as supplement. The four elements look different but can be implemented in combination or individually. For example, media as educator entails that the media have to educate the masses about public health issues. The objective of educating the public about public health issues is to ensure that they are mentally armed with information that will serve as a checklist for their behaviour. Kim and Noriega [5] note that the media are critical players in health education. The researchers add that one of the strategies through which the media can achieve this is a combination of education and entertainment. Okim-Alobi and Okpara [6] hold the view that the media provide a formidable platform through which health information can be made available to the general public. In their view, the media are critical players in educating the general public about health issues. The fundamental way through which the media can educate the general public on public health issues is through coverage. This can take place through strategies like frequently reporting public health issues, recommending appropriate health behaviour and suggesting policies that will assist in combating the public health issues reported. Educating the general public about health issues is like preparing them for war. Therefore, where possible, this has to be done ahead of time, not when the health issue has become a pandemic. This is important because it prepares them to take proactive steps. What this means is that, where possible, it is better for the media to educate the general public about health pandemics well ahead of time, before there is a confirmed case within their locality. Within the context of coronavirus disease 2019 (COVID-19), local media did not need to wait until there was a confirmed case of the virus in their country before educating the general public about its symptoms and prevention, and recommending actions for the government to take. Unfortunately, previous studies [7, 8] that have examined media coverage did not take this aspect into account. Therefore, in this study, the broad objective was to determine how the media in Nigeria provided warning messages concerning COVID-19 when it was first reported in China.
This objective was pursued by comparing media stories about the pandemic when it was first reported in China and when it eventually spread into the country. The specific objectives were as follows: Health education and the surveillance role of the mass media To educate the general public about health issues, the media need first to monitor the environment, identify potential life-threatening health issues and provide adequate information that will enlighten the masses on those issues. Sharma and Gupta [9] say that health education is an essential aspect of public health and health promotion. They add that the goal of health education is to positively influence the health behaviour of people through information and instruction. That is to say, media coverage is an essential strategy of health education. This is because, through media coverage, the general public will be informed and instructed on issues related to public health. This also means that the surveillance role of the mass media is essential to the study of media coverage of health issues. The surveillance role of the mass media requires that they monitor the society and provide information to members of the society on pending dangers. This function is an expression of the 'watchdog role' of the media: when a dog watches over an area, it makes efforts to let people know each time there is a perceived danger. The dog barks, and this typically attracts the attention of the public, with a corresponding possibility of eliciting actions to avert the danger. Therefore, the surveillance role of the mass media means that they constantly scan the society, evaluate events and highlight areas that pose potential danger to the general wellness of the society. In this sense, the media have a duty to make sure that people are adequately warned of dangers. Donohue et al. [10] must have been referring to the surveillance function of the mass media when they submitted that knowledge is an essential condiment that people need to take informed decisions. This assertion makes a strong case for media workers to constantly monitor the society and provide relevant and sufficient information to members of the general public. The idea behind the surveillance function of the mass media is attributed to Lasswell [11]. In the views of Lasswell, the mass media typically play three roles in society: the surveillance of the environment, the correlation of the different components of society in responding to the environment, and the passing of societal heritage from generation to generation. When this assertion is explained within the perspective of COVID-19, it can be said that the mass media in Nigeria have a responsibility to examine the spread of COVID-19 and coordinate both government and the citizenry on how best to respond to the global health crisis. Based on the surveillance perspective, when the virus broke out in China and was fast spreading to other countries, Nigerian media needed to draw the attention of both the government and the Nigerian people to the need to act proactively. For the government, the emphasis could have been on taking policy decisions that would avoid the spread of the virus into the country while also making adequate provision to contain it in case of an outbreak. For the citizenry, the focal point could have been educating them about the virus in areas like symptoms while also encouraging them to imbibe preventive health behaviour.
Media performance of the surveillance role has been examined in the literature. Gever and Coleman [12] did a study to determine how the media perform their surveillance function within the context of conflict. Their results showed that the conflict between farmers and herdsmen only appears in the media while it is happening; as soon as the conflict subsides, no media reports are made available, there are no warning signs, and news about the conflict disappears from the media. Udeze and Chukwuma [13] carried out a study with the objective of determining public perception of how the broadcast media in Nigeria have fulfilled their surveillance role within the context of security. Their results showed that the public perceive media performance as below expectations. Generally, the media are very important stakeholders in every society. They need to keep vigil over the society, analyse events and draw the attention of the public and the government to the implications of the issues reported. In the process, the media can recommend preventive measures. The overall aim is to make the society a better place. As canvassed by the agenda-setting theory, the media are primary factors that shape the society. In the process of performing the functions ascribed to them in relevant laws (in Nigeria, this is contained in section 22 of the 1999 constitution of the Federal Republic of Nigeria), the media play a role in influencing health behaviour and shaping government policies and programmes in the health sector. The media of communication, such as newspapers, magazines, radio, TV and Internet-based media, are important agents of any society. It is also in consideration of the important role of the media that they are regarded as the fourth estate of the realm, after the executive, the judiciary and the legislature. The three important issues in health communication are as follows: health behaviour, health policies and health programmes. Maryon-Davis [14] highlighted three areas in which the media of communication can be useful for health promotion: public information, social marketing and media advocacy. According to the researcher, through public information, the media educate the general public on health issues. With the use of social marketing tactics, the mass media of communication engage with the general public and trigger them to accept and apply certain health behaviours in their daily interactions. Finally, Maryon-Davis adds that with the application of a media advocacy strategy, the mass media of communication can raise awareness of health policies with a view to improving the wellness of the general public. It is noteworthy that the three strategies are typically used in combination when using the media for health promotion [15]. Health behaviour is any action that a person engages in that has an implication for his or her health. This explanation is broad because health behaviour has two perspectives. The first is health behaviour that is beneficial to a person's health (Conner 2002); this can be regarded as positive health behaviour. Such behaviour within the context of COVID-19 may include regular hand washing, social distancing, avoiding touching one's face with unwashed hands and staying at home, among others. On the other hand, there are health behaviours that are dangerous to a person's health. Such behaviour makes a person vulnerable to diseases and ailments. Examples of such behaviour, within the context of the current study, include going to crowded places and shaking hands indiscriminately, among others.
Media coverage of health issues is also likely to elicit changes in government policies and programmes or the introduction of new ones. The government of every country is usually at the forefront of improving the health sector through budgetary provision and other interventions that make the health sector viable. Over the years, media coverage of health issues and how such coverage impacts behaviour has been examined. Ankomah et al. [16] examined the impact of radio programmes on the utilization of anti-malaria commodities and reported that people exposed to radio messages on the benefits of utilizing anti-malaria commodities are more likely to modify their health behaviour based on the media content than those not exposed. Gupta and Sinha [7] did a study to ascertain how the media report health issues and found that both electronic and print media gave less attention to health matters when compared with other issues like crime, politics and entertainment. Onyeizu and Binta [8] found in their study that even though health issues were well reported in the media, they appeared mostly as straight news stories, with less prominence given to health issues among the media examined. Galiani et al. [17] examined how media content on hand washing influences health behaviour; their results showed no significant link between media messages and hand washing among the sample studied. Bowen [18] did a study to ascertain the link between media messages and the utilization of treated bed nets in Cameroon and reported a significant association between the two variables. The point to make here is that the mass media of communication are essential in health promotion, health behaviour and policy advocacy. The studies examined above paid less attention to the role of time as a moderator of media coverage of health issues in general and infectious diseases in particular. COVID-19 was first reported in the city of Wuhan in China in December 2019. At that time, it was largely regarded as a Chinese problem that was also going to end in China. According to Wu et al. [19], in an article published in the Journal of the Chinese Medical Association, COVID-19 was first reported in late December in Wuhan and quickly spread to other places in China and, eventually, other parts of the world. In Nigeria, COVID-19 was first confirmed on 27 February 2020. This was after the virus had been reported in many other parts of the world, such as the United States, Italy and Russia, among others. This means that Nigeria had ample time to prepare for the outbreak. There have been many confirmed cases of COVID-19 globally, with several fatalities. The World Health Organization [20] reported a total of 5 267 419 confirmed cases of COVID-19 as at 25 May 2020, adding that a total of 341 155 people had died of the virus. The situation, when compared with one month earlier, is frightening: as at 25 April 2020, there were only 2 710 948 confirmed cases globally and 187 844 deaths. These figures represent multiple increases in the number of cases. WHO also reported that in Nigeria there were a total of 7839 confirmed cases and 226 deaths as at 25 May 2020. This figure represented an exponential increase because, as at 25 April 2020, Nigeria had only 1095 confirmed cases with only 32 deaths; a month later, these had increased more than four times.
COVID-19 has proven that the global health system is still vulnerable and that the world is not as advanced in science as the 21st century has led us to believe. COVID-19 has rather shown that the interconnectedness of the global economy has made the world vulnerable, such that what happens in one country can have a significant impact on the entire world. COVID-19 has impacted significantly on almost every part of the world. It has grounded economic activities; schools have been shut down, places of worship have been deserted, and international flights have been severely limited, if not completely stopped. There is a near total lockdown as people are encouraged to stay at home. Governments have placed restrictions on movement both locally and internationally. People's sources of livelihood have been threatened. The world has many lessons to learn from COVID-19, both now and when it is finally contained. We made use of agenda setting theory to articulate this study. The theory originated from a 1922 book written by Walter Lippmann [21] with the title Public Opinion. Lippmann, in the opening of the book, painted a picture of a 1914 scenario in which a few Englishmen, Frenchmen and Germans lived on an island with no cable access but got to know of a British mail steamer that usually circulated once every 60 days. However, it happened that in the month of September the mail steamer was yet to visit, but the islanders were still discussing the latest newspaper, whose content was about the forthcoming trial of Madame Caillaux for the shooting of Gaston Calmette. Lippmann narrated further that the people congregated with high expectations on a day in mid-September to learn from the captain what the judgment had been. They realized that for over six weeks, those of them who were English and those of them who were French had been fighting, on behalf of the sanctity of treaties, against those of them who were Germans. Lippmann then asserted that for six consecutive weeks they had behaved as though they were friends, when they were not. By this explanation, Lippmann painted a picture of the power of the media in setting the agenda for the public: the inhabitants of the island did not act as enemies because of their ignorance of what was happening. Although Lippmann did not specifically mention the agenda setting power of the media, he provided a sketch in this regard. Lippmann attributed the images in the minds of the public to media contents [22] [23] [24] [25] [26]. Also, Cohen [27, p. 13] avers that 'the media may not be successful much of the time in telling people what to think, but it is stunningly successful in telling its readers what to think about'. Soroka [28] posits that Cohen's assertion remains the clearest and most frequently cited enunciation of the public agenda-setting hypothesis. A clear postulation of the agenda setting theory was made by McCombs and Shaw after a 1972 study of 100 undecided voters in Chapel Hill to determine the correlation between voters' agenda and media agenda. The result showed a significant correlation between media content and issue agenda. McCombs and Shaw thus conclude: In choosing and displaying news, editors, newsroom staff, and broadcasters play an important part in shaping political reality. Readers learn not only about a given issue, but also how much importance to attach to that issue from the amount of information in a news story and its position.
Gever [26] tested the agenda setting theory with a sample of 400 respondents and reported that the media are effective in setting the agenda for the general public. Within the context of this study, it can be argued that the media had the ability to set the agenda for the general public when COVID-19 was first reported in China. They could have done this by educating the public to adopt the appropriate health behaviour that would help them survive the health challenge should it spread to Nigeria. Also, the media could have drawn the attention of the government to the pending danger by letting them know why preventive measures were needed. Therefore, we made use of the agenda setting theory to ascertain how the Nigerian media exercised their agenda setting ability in the time of the COVID-19 pandemic. We carried out this study with the use of content analysis. We regarded content analysis as useful because we aimed to examine media content on COVID-19 and to determine whether media content related to COVID-19 changed between when the disease was first reported in China and when it spread to Nigeria. We sampled newspapers, TV stations and radio stations for the study. We wanted to make sure that we had a representation of different types of media so that our sample would be rich enough to enable us to draw conclusions. Therefore, we sampled two newspapers, namely the Nation and the Daily Sun (both privately owned), and two TV stations, namely the Nigerian Television Authority (government owned) and Channels Television (privately owned). We also sampled two radio stations, namely Dream FM (privately owned) and the Federal Radio Corporation of Nigeria (government owned). It should be noted that there are no national daily newspapers in Nigeria that are owned by the government; that is why there was no government newspaper in the sample selected. We combined stratified and simple random sampling to select the media houses studied. Two attributes were used to guide our stratification: ownership and media genre. The ownership structures considered were private and government ownership; the media genres considered were newspapers, TV and radio. After the stratification, we randomly sampled the media houses, making sure that all the strata were duly represented. This means that our sample of the media examined was rich in terms of the representation of media with different features. To sample the stories examined in the study, we made use of the motif sampling approach. Gever (2018) defines the motif sampling strategy as the use of key words to retrieve data from the websites of media outfits. Therefore, we made use of the motif approach to retrieve data about COVID-19 from all the media houses selected, using key words like 'coronavirus', 'new confirmed cases of COVID-19' and 'outbreak of COVID-19', among others. After the search results were generated, we screened the stories and sampled only those that were related to COVID-19. To achieve the objectives of this study, we measured the following. First, we were interested in knowing the number of stories on COVID-19 that appeared in the media within the period examined; therefore, we counted the number of story occurrences.
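For readers unfamiliar with motif sampling, the keyword screen it describes can be illustrated in a few lines of Python. This is a minimal sketch of the idea only, not the study's actual retrieval pipeline; the headlines and the keyword list are hypothetical placeholders.

```python
# Minimal sketch of "motif" (keyword) sampling: screen retrieved headlines for
# COVID-19 search terms. All headlines below are hypothetical placeholders.
KEYWORDS = ["coronavirus", "covid-19"]

def matches_motif(headline: str) -> bool:
    """Return True if any search keyword occurs in the headline (case-insensitive)."""
    text = headline.lower()
    return any(keyword in text for keyword in KEYWORDS)

retrieved = [
    "Outbreak of COVID-19: WHO issues new guidance",       # hypothetical
    "Election tribunal adjourns sitting until June",       # hypothetical, unrelated
    "Nigeria records new confirmed cases of coronavirus",  # hypothetical
]

sampled = [headline for headline in retrieved if matches_motif(headline)]
print(sampled)  # only the two COVID-19-related headlines survive the screen
```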
We wanted to know whether the format the media used to report stories on COVID-19 changed between the period before a case of the virus was confirmed in Nigeria and the period after. Therefore, we made use of story formats. The first was straight news: stories about COVID-19 that reported issues about the pandemic with no in-depth analysis. The second category was feature, the opposite of straight news, which offered detailed information about issues on COVID-19. In the third place, we looked at opinion stories: stories that revealed the opinions of people (e.g. experts) about the issue or the views of media houses. In the final place, we looked at public service announcements: announcements made by the media houses as part of their social responsibility. Our attention next was to assess whether media stories on COVID-19 made recommendations on the preventive health behaviour that people should engage in. Therefore, we examined the following. Recommendation contents: contents that made recommendations on health behaviour regarding how to avoid contracting COVID-19. No-recommendation contents: contents that did not make recommendations on the health behaviour to adopt so as to prevent COVID-19. In the next category, our attention was to determine whether media stories made recommendations to the government on how to contain the virus. Therefore, we examined the following. Suggestion-to-government contents: contents that made suggestions to the government on how to contain COVID-19. No-suggestion contents: contents that did not make any suggestion to the government on how to contain COVID-19. We made use of the article as the unit of analysis for our study; articles reported in the selected media served as our units of analysis. We made use of a code sheet as the instrument for data collection. We also took steps to make sure that inter-coder reliability was within an acceptable range. We achieved this by randomly selecting two coders to code 20% of our stories. Subsequently, we made use of Krippendorff's alpha (KALPHA) to evaluate the inter-coder reliability, using SPSS version 22. After the analysis, we arrived at an inter-coder reliability of 0.91 for story frequency, 0.79 for story type, 0.79 for health behaviour recommendation and 0.78 for suggested government intervention. We utilized a combination of descriptive and inferential statistics in our analysis: simple percentages among descriptive statistics and the chi-square test among inferential statistics. We made use of tables to present our results. We examined the stories on COVID-19 from the six media selected for the study; our search yielded a total of 537 stories on the subject matter. We examined these stories taking into account the objectives of the study, and the results are presented in Table I. The objective of Table I is to find out the frequency of media coverage of COVID-19 before and after it was confirmed in Nigeria. The results showed that the media did not provide sufficient stories on COVID-19 until there was a confirmed case in Nigeria. The chi-square analysis showed a significant link between Nigeria's COVID-19 status and the frequency of coverage, χ² = 34.302 (df = 5), P < 0.05.
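The two statistics used in this analysis can be reproduced with standard tools. The sketch below is a minimal illustration rather than the authors' SPSS procedure: it computes Krippendorff's alpha with the third-party krippendorff package and a Pearson chi-square with scipy, together with the contingency coefficient C = sqrt(χ²/(χ² + N)) used in the next section. All coder ratings and cell counts in the sketch are hypothetical placeholders. As a consistency check on the reported figures, if N is taken as the 537 sampled stories, then C = sqrt(34.302/(34.302 + 537)) ≈ 0.245, which agrees, up to rounding, with the C = 0.244 reported below.

```python
import numpy as np
from scipy.stats import chi2_contingency  # pip install scipy
import krippendorff  # pip install krippendorff (third-party package)

# --- Inter-coder reliability: Krippendorff's alpha at the nominal level ---
# Hypothetical codes from two coders for ten stories
# (1 = straight news, 2 = feature, 3 = opinion, 4 = public service announcement).
coder_a = [1, 1, 2, 3, 1, 4, 2, 1, 3, 1]
coder_b = [1, 1, 2, 3, 1, 4, 2, 2, 3, 1]
alpha = krippendorff.alpha(reliability_data=[coder_a, coder_b],
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha = {alpha:.2f}")

# --- Chi-square test of independence plus the contingency coefficient ---
# Hypothetical 2 x 2 cross-tabulation: rows = before/after the first confirmed
# Nigerian case, columns = stories with/without a given attribute.
observed = np.array([[40, 80],
                     [180, 70]])
# correction=False gives the uncorrected Pearson chi-square statistic.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
n = observed.sum()
C = np.sqrt(chi2 / (chi2 + n))  # Pearson's contingency coefficient
print(f"chi2 = {chi2:.3f} (df = {dof}), p = {p:.4f}, C = {C:.3f}")
```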
The degree of the relationship was determined with the contingency coefficient (C), which yielded C = 0.244, interpreted as 24.4%. In Table II, we wanted to know whether a difference existed in story type between media coverage of COVID-19 when it first broke out in China and when there was a confirmed case in Nigeria. Our results showed that before there was a confirmed case in Nigeria, the story type was largely straight news, with little variety in story type. However, when there was a confirmed case in Nigeria, the media used different story types, such as straight news, feature stories, opinion stories and public service announcements, to report the health issue. We found an association between Nigeria's COVID-19 status and the story type used (χ² = 93.014 (df = 3), P < 0.05). The degree of the association was explored with the contingency coefficient (C), which yielded C = 0.372, interpreted as 37.2%. In Table III, we wanted to determine whether media stories on COVID-19 made suggestions on the appropriate health behaviour for the general public. We found that before the outbreak of the virus in Nigeria, only 33% of media stories made recommendations on the health behaviour to be adopted by the masses. However, after the outbreak, this figure more than doubled, to 72%. The chi-square analysis showed χ² = 49.084 (df = 1), P < 0.05, an indication that the results were significantly related to the status of the outbreak in Nigeria. Since there was a significant association, we took the next step of examining the degree of the association. We achieved this with the contingency coefficient (C); our result showed C = 0.308, interpreted as 30.8%. In Table IV, we sought to ascertain whether media reports made recommendations on policies and programmes that the government should implement so as to contain the outbreak of the virus. The results showed that only 28.6% of the stories made recommendations to the government on policies and programmes before the outbreak of the virus in Nigeria. There was a marginal increase to 37.4% when the virus was confirmed in Nigeria. We further cross-tabulated the status of the outbreak in Nigeria with policy recommendations. The result showed χ² = 2.213 (df = 1), P > 0.05 at the 0.05 level of significance. Therefore, we concluded that the status of the outbreak in Nigeria was not significantly linked to policy recommendations. In this study, we investigated the assumption that Nigeria's COVID-19 status is significantly associated with how the local media performed their surveillance role of highlighting the dangers of the virus to the general public and the government of Nigeria. The composition of our sample took into consideration characteristics of media such as ownership (private and public) and genre (print and electronic). Therefore, we examined six media outlets, made up of two newspapers, two TV stations and two radio stations. We tested the surveillance role of the mass media with specific attention to frequency of coverage, story type, recommended health behaviour and recommended policies and programmes for the government. We classified our analysis into two broad time frames: when Nigeria had no confirmed case and when Nigeria had a confirmed case. We found that there were generally few stories about the virus prior to when Nigeria had a confirmed case (Table I). This is despite the fact that the virus was ravaging other countries of the world.
Our second key finding was that before Nigeria had a confirmed case, stories about the virus were largely in straight news format. However, this changed when Nigeria had a confirmed case, as the media examined made use of different story formats, such as features, opinion and public service announcements, in addition to straight news (Table II). A similar situation was noticed regarding recommendations on appropriate health behaviour. When Nigeria had no confirmed case, the few stories about the health issue paid little attention to the life-saving health behaviour that the masses should adopt. This changed sharply when there was a confirmed case in the country (Table III). However, media stories recommending policies and programmes did not follow a similar trend: even though there were few stories about the virus that recommended policies and programmes to the government, this did not change significantly even after Nigeria had a confirmed case. This implies that the Nigerian media generally failed to suggest policies and programmes to the government regarding the containment of COVID-19. Our results extend previous studies [7, 8] on media coverage of health issues by including the role of time in such coverage. With this addition, we hope that health reporters, health communication experts and policy makers in the health sector will be better guided in their understanding of the role of the media in health promotion and education. In line with [24, 26, 29], our results showed that media agenda setting on health issues can be improved through the provision of early health warning messages. Based on the findings of this study, we conclude that the media in Nigeria did not provide sufficient warning messages about COVID-19 when the virus was yet to spread to Nigeria. It is also our conclusion that even after the virus was confirmed in the country, the media did not provide sufficient information aimed at propelling health policies and programmes vis-a-vis COVID-19. This conclusion suggests that improvement is needed in the media's provision of warning health messages as well as policy-related content. Drawing from the results of this study and the conclusions derived, we make four suggestions for further studies. First, we focused on the role of COVID-19 status in media coverage; further studies should pay close attention to how the individual media genres report the virus. In the second place, it is recommended that further studies examine the influence of media messages on health behaviour related to COVID-19 as well as on environmental practice. Additionally, considering the significant impact COVID-19 is having on society, it is recommended that further researchers examine how political news has changed during the pandemic. Finally,
treatment or SARS-CoV-2 vaccine at present, which poses a great threat to the lives and health of people worldwide. In China, the healthcare and hygiene conditions of rural areas are poorer than those of urban areas, and rural residents possess poor knowledge about protection from diseases. 1 Furthermore, this epidemic occurred during the Chinese New Year, a time when large numbers of rural residents return to their hometowns, leading to an increased risk of an epidemic. Notably, disease control in rural regions is one of the major areas in the prevention and control of an epidemic. Therefore, the Government of the People's Republic of China swiftly employed forceful, orderly, scientific, and thorough measures to effectively prevent the spread of the disease in rural areas. This ensured the health and safety of the public as a whole, healthy economic development, and social stability in the rural communities, and enabled rural areas to become the home front for national stability. There were five major areas in the COVID-19 prevention and control experience of China's rural areas, all of which involved different components and focus areas. First, an organization, along with an effective operation mechanism, was established to lead and clearly define the responsibilities of COVID-19 prevention and control and to carry out the tasks involved. On the basis of existing regulations on the response to public health emergencies, the governments of the various provinces, autonomous regions, and municipalities directly under the central government set up local epidemic emergency response headquarters after the COVID-19 outbreak. The relevant departments of the local governments at or above the county level were responsible for handling the emergency response to the outbreak within the scope of their respective functions and duties. Once COVID-19 had become an epidemic, committees in villages and towns immediately organized forces to unite and cooperate on mass disease prevention and treatment measures, assisting the health administrative department, other relevant departments, and medical and health institutions in the collection and reporting of information, the dispersion and isolation of personnel, the implementation of public health measures, and the conveyance of infectious disease prevention and control knowledge to the villagers. During the epidemic period, the State Council issued a notice on further improving the prevention and control of the pneumonia epidemic caused by the novel coronavirus in rural areas. Under the leading group for epidemic response and following the working mechanism for joint prevention and control, local party committees and governments set up special teams for epidemic prevention and control in rural areas to strengthen unified command and dispatch measures, to adhere to the integrated deployment and promotion of prevention and control in rural and urban areas, and to effectively guarantee the materials, funds, and personnel needed for epidemic prevention and control in rural areas. For example, in the Fuyang District of Hangzhou City (Zhejiang Province), the whole region implemented a four-level linkage mechanism of "organizing groups and joining villages" to carry out the epidemic prevention and control work.
According to the overall planning requirements of the main leaders, a total of 276 epidemic prevention teams, which included all township cadres and village (Party member) representatives, were set up in the 276 administrative villages in 24 townships (subdistricts) of the region. More than 1600 township cadres were assigned to the teams to carry out COVID-19 epidemic prevention and control, thus achieving full coverage of the region. During the period of group association, members of the town and street leading groups were responsible for contacting the villagers, grasping the whereabouts of mobile personnel in the villages, paying attention to personnel trends, and regularly reporting to the main leaders of the town and street leading groups. In the event of any special situation, the town and street leading group would contact the emergency response team immediately and report to the district leading group. Township cadres clarified specific responsibility blocks through zoning; regularly collected and summarized changes in the epidemic situation in their areas; and organized the promotion of epidemic prevention knowledge, the control of personnel entry and exit, and other investigations. The village (Party member) representatives were divided according to their places of residence and were in close contact with the households, being responsible for understanding the situation of each household in real time (via telephone, WeChat, etc.), collecting information on the actual difficulties faced by each household, and reporting any emergencies. All the villages in the district established a village-level defense network, with the town and village cadres and village (Party member) representatives acting as the main body, and the anti-epidemic work was thus carried out in an orderly manner. Third, the monitoring of migrant staff, returning staff, and rural residents was strengthened, and autonomous epidemic control based on local conditions was carried out. Epidemic prevention and control is a battle that requires the participation of everyone, and rural residents must also be guided and mobilized to participate in a joint prevention and control effort. As many staff and students returned to the rural areas before the Chinese New Year, the pressure of prevention and control in the rural areas increased. Therefore, Chinese villages made the quarantine and monitoring of returning personnel a priority, and acts that undermined epidemic prevention and control were made subject to legal sanction, such as the following: refusing to cooperate with epidemic prevention, quarantine, compulsory quarantine, and isolation treatment by deliberate escape, malicious obstruction, violent resistance, and other means; disturbing the medical order of a hospital, injuring medical staff, or preventing state functionaries from performing their official duties according to law; knowingly entering a public place or coming into contact with others while concealing one's status of being possibly or definitely infected with the novel coronavirus; fabricating and spreading false information related to the epidemic situation, knowingly spreading false information on information networks or other media, spreading rumors, making false reports of dangerous situations or the epidemic situation, or deliberately disturbing public order by other means; disturbing the work of epidemic prevention and control groups and disrupting public order by bidding up prices or hoarding supplies for profit; and refusing to carry out the commands issued by the various levels of the People's governments during the period of epidemic prevention and control.
In addition, it is insufficient to rely solely on top-down resource allocation and grassroots cadres for epidemic control. Therefore, the characteristics and situation of each village were used to actively mobilize the public. That is, existing rural administrative organizations and social organizations were utilized, and WeChat groups, public accounts, and mobilization letters were all used to mobilize the masses to participate in grassroots epidemic prevention and control. Mobile investigation teams, village patrol teams, anti-gathering persuasion teams, and volunteer teams for assisting vulnerable people were established. A medical system that complies with the various epidemic control regulations was set up to maintain social order and focus on reconstruction and prevention. During the epidemic, many rural organizations, retired soldiers, high school students, and ordinary villagers in the rural areas served as frontline staff who carried out tasks ranging from cleaning, to the disinfection of public places, to guard duty. These people used professional knowledge for psychological counseling, assisted in promotion work, donated funds and materials, assisted in the collection and distribution of materials, and aided in the organization, updating, and verification of epidemic-related data, so as to truly achieve joint epidemic prevention and control based on society-wide efforts. Fourth, guidance for the epidemic prevention and control work, along with the promotion of epidemic-specific laws, was strengthened. Guidance promotion was strengthened via promotional banners, village …

2. Guaranteeing the flow of capital: To ensure that farmers had access to capital, credit guarantee-related fees for the agricultural industry were reduced or waived, disaster relief funds for the agricultural industry were disbursed as soon as possible, Central Finance development funds for the agricultural industry were distributed to key epidemic regions, and the coordination of local fiscal funds was strengthened.

3. Optimizing administrative review and approval services: The Government of the People's Republic of China has requested the various levels of agricultural departments to simplify review and approval procedures, shorten their duration, improve their efficiency, and continuously innovate services through online work. All these measures will drive the resumption of work and production in the agricultural industry.

4. Driving the resumption of work by migrant workers: Migrant work concerns the livelihood of farmers. While work and production resume in companies in China, epidemic prevention and control for the resumption of migrant work should be carried out to strengthen the effective linkage between import and export regions.

5. Ensuring the availability of daily necessities for agricultural production: Guaranteeing that materials and tools are available for agricultural production necessitates smooth transportation to villages and the maintenance of access to exclusive roads. To completely restore normal agricultural production and vehicular access, so that agricultural products are able to leave places of production and materials for agricultural production can in turn enter these places, the blockade of all disease transmission channels is a prerequisite.

Rural COVID-19 prevention and control work is an important component of this epidemic's prevention and control. However, in rural areas, there is a shortage of medical resources and the diagnostic and treatment levels are poor.
Therefore, COVID-19 control work in rural areas should be adjusted according to local conditions, widespread promotion should be carried out to improve the disease control knowledge of the public, and the masses should be mobilized for participation in epidemic prevention and control work. In China, a strict COVID-19 control network was set up and responsibilities were strictly defined to ensure its efficient operation. In addition, existing medical resources were fully utilized so that the COVID-19 prevention and control work in rural areas could be carried out by rural grassroots medical institutions and rural physicians. This COVID-19 epidemic has brought many new crises to rural areas. However, crises are also opportunities to develop rural regions. At the grassroots level, rural grassroots governance will be optimized to guide the healthy development of rural organizations. With regard to rural environmental governance, the rural toilet revolution was launched, and village cleaning activities were strengthened to comprehensively drive domestic garbage governance and domestic wastewater processing in rural areas. With regard to economic development in the agricultural industry, the collective economic activity of rural areas was strengthened and a modern and intelligent agricultural industry was developed.
As of April 2020, Spain was one of the countries accounting for the most coronavirus disease (COVID-19) deaths (1). More than half of those deaths occurred in persons >80 years of age (2), which highlights the vulnerability of the elderly. Moreover, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can easily spread within nursing homes, causing outbreaks with a high associated mortality rate (3,4). By the beginning of April, the exponential increase of cases had overwhelmed the healthcare system in Spain. In this context, rapid outbreak identification and early intervention in nursing homes were needed. At Vall d'Hebron Hospital, a tertiary hospital in Catalonia, Spain, we conducted test-based screening as a containment measure to promptly implement effective prevention and control measures in nursing homes. We present the early results of a coordinated intervention with primary care teams in ≈6,000 residents and facility staff in nursing homes in our catchment area. We evaluated 69 nursing homes that had a total census of 6,714 persons. We excluded previous laboratory-confirmed cases of COVID-19. During April 10-24, an integrated team of hospital and primary care staff obtained samples for SARS-CoV-2 testing from all residents and workers: nasopharyngeal and oropharyngeal swab samples, both combined in the same collection tube with viral transport media. We used a commercial CE-IVD-marked, real-time reverse transcription PCR-based assay (Cobas SARS-CoV-2; Roche Diagnostics, https://www.roche.com) on a Cobas 6800 system. Each nursing home director recorded any symptoms present at least 48 hours before the scheduled day of testing for all residents and staff. According to the World Health Organization case definition of a suspected case of COVID-19, a person was classified as symptomatic if fever or acute respiratory symptoms were present at any moment during the preceding 14 days. In the absence of either, the person was considered to be asymptomatic. We obtained a total of 5,869 samples, 3,214 from residents and 2,655 from facility staff. Overall, 768 (23.9%) residents and 403 (15.2%) staff members tested positive for SARS-CoV-2 (Table). The presence of fever or respiratory symptoms during the preceding 14 days was recorded for 2,624 residents (81.6%) and 1,772 staff members (66.7%). Among those testing positive and for whom we had information about symptoms, 69.7% of the residents and 55.8% of the staff were asymptomatic. On the basis of laboratory results, we planned specific infection prevention and control measures, adapted to facility characteristics, in <72 hours. The most relevant measures applied included isolation of infected residents, establishing cohorted areas and designated staff, excluding infected staff from work, ensuring a proper supply of personal protection equipment, and training staff in contact- and droplet-based precautions. We established coordinated follow-up evaluation with primary care teams and facility directors. COVID-19 heavily affected nursing homes, causing countless deaths in Spain (5,6). Restriction policies for visitors in nursing homes were described as part of the state of emergency declared on March 14 (7), but a national guideline to reduce the risk for SARS-CoV-2 transmission in these settings was not available until March 24 (8). Moreover, despite knowledge of community transmission starting in late February, widespread testing for SARS-CoV-2 was not available until mid-April.
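As a quick arithmetic check, the positivity percentages reported above follow directly from the stated counts; a minimal Python sketch, with the counts transcribed from the text:

```python
# Reproduce the reported test-positivity percentages from the stated counts.
residents_tested, residents_positive = 3214, 768
staff_tested, staff_positive = 2655, 403

print(f"Resident positivity: {residents_positive / residents_tested:.1%}")  # 23.9%
print(f"Staff positivity:    {staff_positive / staff_tested:.1%}")          # 15.2%
```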
Our data show an overall high prevalence of SARS-CoV-2 infection in residents and staff, indicating high transmission in these settings. Specific aspects of nursing homes (shared rooms or bathrooms, physically or cognitively impaired residents requiring high-demand care, rotating staff working in different facilities) and the limited adoption of prevention and control measures reported by our teams are some factors that may explain these results. Among those with known symptom status, we found a high proportion of asymptomatic cases: 69.7% of infected residents and 55.8% of infected staff. Our study had several limitations. The ascertainment process could lead to misclassification due to atypical symptoms in the elderly. Furthermore, cross-sectional symptom assessment and testing did not allow us to differentiate between presymptomatic and asymptomatic cases. Nevertheless, these values are consistent with a study performed in a nursing facility in King County, Washington, USA, in which 56% of the residents testing positive were asymptomatic (9). Given that presymptomatic and asymptomatic transmission has been demonstrated (10), our data suggest that asymptomatic cases could have had an important role in transmission dynamics. Symptom-based approaches would have failed to correctly identify cases, allowing continued transmission. Furthermore, testing of facility staff should be included as part of the prevention and control measures, because staff may contribute to sustained transmission. In conclusion, the high prevalence of SARS-CoV-2 cases found in nursing homes highlights that this vulnerable population requires special attention and proactive interventions in coordination with primary care teams. In the context of established community transmission of SARS-CoV-2, we recommend implementing test-based screening, irrespective of symptomatology, in nursing homes as the best approach to rapidly implement prevention and control measures. Leuconostoc lactis is an intrinsically glycopeptide-resistant but ampicillin-susceptible, gram-positive, facultatively anaerobic coccus (1) found in food products including dairy products, vegetables, and wine. L. lactis is a very rare pathogen associated with bloodstream infections (2). Staphylococcus nepalensis is a novobiocin-resistant, coagulase-negative staphylococcus also found in food products, such as dry-cured ham and fish sauce, that has not been reported as a human pathogen (3-5). Neither L. lactis nor S. nepalensis is part of normal human bacterial flora (2,3). A 71-year-old man with hypertension and hyperlipidemia sought care for upper abdominal pain and vomiting after a meal at his son's restaurant. A computed tomography (CT) scan showed collapse of the lower esophagus wall and expansion of the mediastinum; medical staff diagnosed a spontaneous esophageal rupture and performed emergency surgery. Surgical findings demonstrated a 5 cm perforation of the lower esophagus with no rupture into the thoracic or abdominal cavity. The final diagnosis included Boerhaave syndrome, esophageal hiatus hernia, and mediastinitis. Two sets of blood cultures taken on day 1 were positive for gram-positive cocci, which we identified by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry as L. lactis in an aerobic bottle (10.7 h to culture) and an anaerobic bottle (13.3 h to culture) and S. nepalensis in 1 anaerobic bottle (24.3 h to culture).
The 2 bacteria were considered indicative of true bacteremia; therefore, we escalated treatment from ampicillin/sulbactam to piperacillin/tazobactam for L. lactis (Appendix Table 1, https://wwwnc.cdc.gov/EID/
Dental settings have one of the highest risks of infection transmission (Jamal et al., 2020; Mohebati et al., 2010). Therefore, the COVID-19 pandemic has presented a significant challenge for dental students and infection control measures. The data presented here were extracted from a survey conducted among Palestinian dental students in their clinical study years to evaluate their readiness to return to dental care provision during the COVID-19 pandemic. A total of 305 dental students from Al-Quds University (AQU) and the Arab American University (AAU) completed the questionnaire in mid-May 2020. Thirty-four percent of the current sample (n=103) perceived COVID-19 as very dangerous and 84.3% (n=257) believed that COVID-19 is a serious public health issue. Fifty-five percent (n=168) did not consider themselves prepared for this outbreak and 66.2% (n=202) did not think that their outpatient clinics' infection control measures prior to COVID-19 were adequate to receive patients during this pandemic. Eighty-eight percent of the students (n=269) admitted to fear of transmitting the virus to family and friends. This fear was mainly related to their perception that the standard precautions used in dental settings are inadequate and make it unsafe to deal with patients during the current pandemic (χ² = 50.45, p < .001). Thus, 82% of students (n=250) preferred to avoid working with suspected COVID-19 patients. This perception of unsafety related to pre-COVID-19 infection control measures also impacted the level of confidence these students had in dealing with COVID-19 patients (χ² = 25.8, p < .01). Only 26% (n=80) of the students had a "considerable-to-great" level of confidence in handling suspected COVID-19 patients (Figure 1). It is obvious from the current data that students' confidence in handling COVID-19 patients and their fear of transmitting infection to family and friends were related to their perception of the inadequacy of the standard infection control protocols used prior to COVID-19. Therefore, dental schools need to invest in the new infection control measures put in place by national authorities and adopted by universities as their new norm. As an example, AQU followed a very strict protocol in reopening its student dental clinics and secured all the advanced PPE needed to implement these protocols. This should be accompanied by periodic updating of students' knowledge about infectious diseases and control measures. Another important point that needs to be addressed by dental schools following the COVID-19 pandemic is how to change the current teaching philosophy to make it more resilient to future pandemics and crises. First, dental schools need to teach their students not to depend solely on the current restorative model and to learn alternative evidence-based treatment options that focus on prevention, minimal intervention, and less aerosol generation. Examples are Atraumatic Restorative Treatment, the Hall Technique, and the use of Silver Diamine Fluoride (SDF) in disease stabilization (de Amorim et al., 2018; Khan et al., 2019; Slayton et al., 2018). Second, students in this sample believed that they have an important role in educating patients about COVID-19; this sense of responsibility needs to be maximized by emphasizing the importance of dentists' role in pandemics in providing care and supporting other frontline health care providers when needed.
Dental students need to view themselves not only as excellent dentists but also as practicing healthcare professionals providing oral health care within the context of systemic health and infection prevention. The authors would like to thank each anonymously participating dental student of Al-Quds University (Palestine) and the Arab American University (Palestine) who contributed to making this work possible. The authors declare that they have no competing interests. The data that support the findings of this study are available from the corresponding author upon reasonable request.
Similar to previous coronavirus epidemics, the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus is spread primarily by droplets or contact from symptomatic individuals; however, SARS-CoV-2 appears to also be transmissible from individuals who have not yet displayed symptoms. 1 The virus replicates to high titers in the upper airway, with high degrees of shedding during the first week of symptoms. 2 Additionally, aerosolization of the virus can occur with certain procedures, such as endotracheal intubation or extubation. The high rate of transmission poses significant risks to all health care workers, other patients, and bystanders without appropriate preparation. 3,4 The first case of coronavirus disease 2019 (COVID-19), the disease caused by the SARS-CoV-2 virus, was identified in Massachusetts on February 1, 2020. But it was not apparent how the virus would affect the Commonwealth until a month later, when a cluster of 70 cases was identified among attendees of a scientific conference in Boston. This was approximately the same time that the World Health Organization declared a global pandemic, on March 11, 2020. 5 In preparation for the pandemic, we reviewed relevant literature and recommendations specific to anesthetic care, which appropriately focused on operating room preparation 6 and patient management during airway manipulation and transport, 4,7,8 with a notable paucity of literature detailing preparation tailored to obstetric anesthesia. 4,9 The delivery of anesthesia on the labor and delivery (L&D) unit is distinct from care in the intensive care unit (ICU) or in the operating room. In many cases, the method of delivery is unknown until the end and may emergently change with little notice. Clinicians must also prepare for unexpected operative and nonoperative procedures, such as management of postpartum hemorrhage or emergent cesarean delivery. Thus, the impact of SARS-CoV-2 in obstetric anesthesia required the creation of parallel workflows to simultaneously deliver high-level care to pregnant patients with and without COVID-19 for labor analgesia, cesarean anesthesia, and other procedures. While we appreciated that the rapid spread of this virus would expedite the time to the presentation of the first patient in our L&D unit, the first patient with COVID-19 was admitted to our unit overnight for observation before planned preparedness steps had been completed. This unexpected admission resulted in a significant waste of material supplies, because our infection control consultants recommended that all disposable supplies in the patient's room be discarded following their stay. This report describes the processes we subsequently used to rapidly adapt our obstetric anesthesia service and the solutions to reduce waste, maintain safety, and support effective care of patients with confirmed or suspected SARS-CoV-2 infection. Because the SARS-CoV-2 virus will not likely disappear until effective vaccines are developed, the disease will continue to spread to locations that are not currently heavily affected and may be ill-prepared to care for these patients while keeping health care workers safe. Our goal is to provide materials that may assist others in improving their units and to discuss a system that can be used for rapid preparation during a future crisis. 10 This article reports all appropriate components of the Revised Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0), published on September 15, 2015.
This was classified as a quality improvement study and was determined to be exempt from institutional review board review and did not require informed consent. The study was performed in a tertiary care facility and teaching hospital for Harvard Medical School serving an urban area with a metropolitan population of 4.6 million. The medical center is the regional referral center for the Beth Israel Lahey Health network, which delivers 15,000 pregnant patients annually. Our intent was to develop a system of care that would satisfy several aims:

• Provide full and simultaneous services for both infected and noninfected patients
• Rapidly adapt to new workflows
• Maximize safety for staff and patients through standard practices
• Minimize risk of contamination during procedures
• Optimize supplies and materials

The development of these critical adaptations was performed using cyclical improvement methodology in combination with plus-delta debriefing. These tools were used to create workflows consisting of sequential checklists and procedure-specific packs. Workflows were distributed via e-mail to all clinicians, recorded on video and made widely available, posted as laminated pages in appropriate locations, and published on the hospital intranets. Each of these was updated with each change in a workflow. We used a cyclical improvement methodology to design each new workflow. Cyclical improvement is based on the Plan-Do-Study-Act methodology introduced by W. Edwards Deming for learning and improvement. 11 The initial step was the creation of a process map detailing each step of the workflow, including donning and doffing personal protective equipment (PPE), detailing every step a clinician may take during a procedure. We also defined possible deviations from expected outcomes, for example, when additional materials may be needed or the anesthetic plan changes. After the creation of the initial process map, we performed small-group in situ simulations using a clinician, an observer, and an event recorder. The clinician simulated performing each step read to them by the recorder, including the use of equipment and medications. The observer's role was to (a) confirm that all steps were completed and (b) identify where breaches in protocol could result in substandard outcomes. Based on simulation findings, the workflow was revised, and the cycle was repeated. After achieving a workflow that was stable, the process was presented to clinicians for use in patient care. After each case, a debriefing was conducted, and the workflow was updated based on these findings. Based on previous experience at our center, we modified the plus/delta format for debriefings of our processes and workflows after each real-time test. This method is commonly used and well described in aviation training. 12 Our experience is that this exercise lends itself well to rapid cycle improvement. The debriefing team leader begins the session by directing focus on the events and processes (the System) as opposed to any individual actions. Participants are notified that commentary or concerns about individual clinical performance will be addressed separately. Participants are prompted to discuss what went well in the System (plus); this strategy is intended to ensure that the strengths of the System are identified and are not changed in future iterations. The debriefing leader then focuses the discussion on processes that could be improved or changed (delta).
This may resemble a short, focused brainstorming session where clinicians recommend alternate workflows or ideas for improvement. The debriefing ends with a request that additional ideas for improvement be brought forward at any time. Debriefings following neuraxial labor analgesia procedures with COVID-19 patients were performed with the director of obstetric anesthesia and frontline clinicians. Debriefings after each operative procedure on COVID-19 patients involved frontline clinicians, plus leadership personnel from the Divisions of Obstetric Anesthesia and Quality and Safety, L&D Nursing, the Department of Obstetrics and Gynecology, and the Department of Neonatology. The labor analgesia workflow, consisting of the checklist, procedure-specific packs, and guidance graphics, underwent 2 cycles of small-group simulation. Before any opportunity for further refinement, the workflow was urgently required for clinical care. After each use, the workflow underwent redesign using the cyclical improvement process. Within a week, opportunities for refinement were no longer being identified during debriefings. The sequential checklist is presented in Figure 1. Preparation for operative procedures represented greater complexity due to the range of distinct modes of anesthesia that can be required, the number of collaborative services involved, and the need to redesign the procedural space for COVID-19 patients. Our L&D operating room preparation process drew from the perioperative COVID-19 pathways under development for the general operating rooms at our institution by the Division of Quality, Safety, and Innovation of the Anesthesia Department, and in consultation with Infection Control. Preparation for operative procedures in patients with COVID-19 included dividing isolation space for infected patients into distinct work zones (clean area, transition anteroom, and contaminated procedure room) that minimized the risk of contamination. The unit was separated such that 1 operating room and 4 labor rooms were sealed off from the rest of the unit, with the hallway serving as a transition anteroom (Supplemental Digital Content, Figure 1, http://links.lww.com/AA/D222). Each labor room was stocked with the minimal necessary equipment, while operating room preparation was based on reports from previous epidemics. We removed all nonessential materials and supplies from the operating room and wrapped remaining surfaces in plastic covering (Supplemental Digital Content, Figure 2A, http://links.lww.com/AA/D222). The perioperative case workflow was distributed to a multidisciplinary group including representatives from obstetrics, maternal-fetal medicine, obstetric anesthesia, neonatology, and the anesthesia division of quality and safety. The group was able to perform 1 cycle of cognitive review. Unfortunately, before we could attempt in situ simulations to refine the workflow or disseminate it and train frontline staff, the process was urgently needed for clinical care. Each use of the workflow was followed by the cyclical improvement process, including thorough team debriefing and redesign, until achieving a final form, which took approximately 11 cycles (Figure 2). We initially expected the team leader to be a physician but found that the anteroom nurse had the greatest situational awareness and was best suited to this task. While the identification of the clinicians who enter the operating room with a patient was clear, defining the order of caregivers leaving the room was challenging.
Especially with the emergent need for general anesthesia, we wanted to minimize the number of individuals in the operating room while still having resources to deal with emergencies. Additional supplies are frequently needed during procedures; thus, we designated a "runner" for both nursing and anesthesia who waited in the anteroom and could be contacted by the nurse inside the procedure room via a hands-free communication headset. Because of the expected low frequency of both general anesthesia and postpartum hemorrhage among our patients, we enclosed supplies for these contingencies in a cart housed in the pared-down operating room that would be sealed to prevent contamination but easily accessed when required (Supplemental Digital Content, Figure 2B, http://links.lww.com/AA/D222). When there was a need to perform these procedures, the cart would be unsealed; unused supplies would be discarded, and the cart and reusable supplies would be decontaminated. We found this to be far superior to a plastic bag, especially for heavy and bulky supplies. Finally, we used an easily decontaminated metal cart as a work surface when a debrief identified that the anesthesiologist had no place to organize supplies (Supplemental Digital Content, Figure 2C, http://links.lww.com/AA/D222). To avoid wasting supplies and to minimize the time required for decontamination, we decided not to use the neuraxial supply cart that we normally bring into the room for procedures. Instead, we composed a list of minimum supplies to be stored in procedure-specific packs. Plastic bags containing the necessary supplies were assembled and labeled for various clinical scenarios. To accompany each pack, we developed a list of just-in-time items that would need to be obtained immediately before the procedure, such as medications and ancillary supplies that could not be stockpiled. These were printed on paper and affixed to each pack to minimize the need for clinicians to call out requests for additional materials during a procedure. Individualized procedure packs were developed for:

• Neuraxial for labor (Figure 3)
• Spinal or combined spinal-epidural anesthesia (Figure 4)
• Conversion of labor epidural to cesarean anesthesia
• General anesthesia

The SARS-CoV-2 virus and the associated COVID-19 pandemic place significant pressure on the obstetric anesthesia care provider to simultaneously care for infected and noninfected patients. Multiple parallel plans must be made for labor analgesia, cesarean anesthesia, emergent conversion from labor to cesarean, and the management of acute complications such as postpartum hemorrhage. In addition, these plans must be coordinated with the obstetric, nursing, and neonatology services in a way that does not increase risk to patients or clinicians. In translating recommendations from governmental organizations and the major societies into clinical guidelines, we realized that variability in individual interpretation could lead to deviation from best practices that carries a higher risk of accidental contamination. 13,14 This is especially critical during donning and doffing PPE. 15

[Figure 3 caption: The procedure-specific card for neuraxial procedures for labor analgesia. This card details the contents of the preassembled pack and also the items that the clinician collects immediately before a procedure, including medications and additional materials. This paper is attached to the pack. BMI indicates body mass index.]
We chose to define standard operating procedures in the form of checklists to ensure the completion of critical steps for clinical care; however, as we simulated performance of a neuraxial labor analgesia procedure, we came to appreciate that it would be easy for clinicians to become contaminated if individual steps were performed out of sequence. We therefore changed from a traditional checklist to one that explicitly defined the temporal sequence of steps. The sequential checklist minimizes deviation from a standard operating procedure and ensures the steps necessary to always provide a "clean" layer of gloves and coverings. Additionally, having an observer who ensures that each step is followed is crucial to protecting ourselves and our colleagues. 13 That both the first labor analgesic and the first cesarean were performed by clinicians who were not engaged in the development of our COVID-19 workflows suggests that this method can be used to enforce a standard operating procedure in a novice population. Clearly, these checklists do not take the place of education and training of a skilled workforce but can be used in an emergency to reduce the risk of error.

Using a cyclical improvement approach allowed us to rapidly design and iteratively refine our workflows after each live case, and to achieve final products very quickly. We see important advantages to the inclusion of frontline clinicians in the cyclical redesign process: stakeholders come to expect that the processes will continue to evolve over time, which reduces the frustration of a constantly changing protocol, and a related gain is the sense of buy-in created among clinicians who feel that their input will play a part in the evolution of the workflow.

Recent difficulties with the medical supply chain nationally were reflected in our hospital and left us acutely aware that wastage of supplies would impact our ability to care for patients. Before this pandemic, the usual method of obtaining materials was either to stockpile supplies in cabinets inside the operating room or to carry them in a specialized cart. Because of the risk of contamination, unused supplies in the patient location need to be either decontaminated or discarded after use by a COVID-19 patient. Our procedure packs, specific for each anticipated type of anesthesia encounter, simplified, standardized, and minimized clinical supplies. We are unaware of a case in which the wrong pack was chosen for a procedure, but this is likely to happen at some point. Our designation of a "runner" to deliver supplies to the procedure room would allow the correction of this error. Our workflows and checklists, as well as the redesign of our procedural areas, reflect institutional needs and practices. In a broader view, we believe that the methods we used for adaptation can be used to refine practices at other institutions and in other situations that require rapid practice changes.

Figure 4. Procedure-specific pack contents for neuraxial anesthesia (spinal or CSE) for cesarean delivery. The right side of the card identifies the contents of the preassembled procedure bag that a clinician will pick up when performing a spinal or CSE for cesarean delivery. The left side of the card identifies the medications that need to be retrieved immediately before placement. BP indicates xxx; CSE, combined spinal-epidural; EKG, xxx; LR, xxx; TB, xxx.
In addition to what we present, the Society for Obstetric Anesthesia and Perinatology (SOAP), 9 Obstetric Anaesthetists' Association (OAA), 16 and Anesthesia Patient Safety Foundation 17 have published a number of resources to consider when preparing an obstetric anesthesia service. Both obstetric organizations recommend early epidural placement during labor, avoidance of general anesthesia, and training and simulation of critical tasks, such as donning/doffing and patient transport. Video laryngoscopy is suggested if general anesthesia is required. SOAP recommendations include the screening of all patients admitted for scheduled/elective procedures and the use of teleconferencing to minimize contact with patients. The OAA resources include additional checklists, which might be useful for adaptation. In conclusion, we share here our obstetric anesthesia pathways for dissemination, because they may be of assistance to other centers experiencing similar challenges related to the COVID-19 pandemic. We also describe the tools that we used to develop these workflows, because they comprise a system that can be generalized to any crisis where a rapid change in processes is needed.
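To make the "sequential checklist" concept concrete, the sketch below enforces a fixed temporal order of steps, each acknowledged by a designated observer; step names, the class, and the observer role are illustrative only and do not reproduce the published checklist:

```python
# Minimal sketch of a sequential checklist: steps must be confirmed
# in order, each acknowledged by a designated observer.
# All step names here are illustrative, not the published checklist.

class SequentialChecklist:
    def __init__(self, steps):
        self.steps = list(steps)
        self.position = 0  # index of the next step to perform

    def confirm(self, step, observer):
        """Observer confirms completion of `step`; out-of-order steps are rejected."""
        expected = self.steps[self.position]
        if step != expected:
            raise ValueError(f"Out of sequence: expected '{expected}', got '{step}'")
        print(f"[{observer}] confirmed: {step}")
        self.position += 1

    def complete(self):
        return self.position == len(self.steps)

donning = SequentialChecklist([
    "perform hand hygiene",
    "don gown",
    "don N95 respirator",
    "don face shield",
    "don outer gloves",
])

for step in donning.steps:
    donning.confirm(step, observer="anteroom nurse")
assert donning.complete()
```

The point of the design is that, unlike a traditional checklist, completion alone is not enough: an out-of-order confirmation fails loudly, mirroring the protective "clean layer" logic described in the text.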
Currently, there is a global outbreak of COVID-19 caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). On March 11, 2020, the WHO declared COVID-19 a pandemic, and as of 14 November 2020 there were a total of 52,487,476 confirmed cases and 1,290,653 deaths. (1) The clinical spectrum of COVID-19 varies from asymptomatic to clinical conditions characterized by respiratory failure that necessitates mechanical ventilation and intensive care unit (ICU) support. Severe pneumonia progressing to acute respiratory distress syndrome (ARDS) is the cause of death in the majority of cases. The death rate ranges from 2-15% in different groups of patients. (2,3,4) Currently, there is no vaccine or specific antiviral drug available for COVID-19. The treatment is symptomatic, and oxygen therapy represents the major treatment intervention for patients with severe infection.

Based on the severity of symptoms, there are several scoring methods that can predict the outcome of the disease. The National Early Warning Score (NEWS) is a popular scoring system used for non-ICU patients suffering from acute illness. (5) This score helps in quickly determining the degree of illness and the intervention required. It also provides the likelihood of death or admission to an ICU. Based on this score, any illness can be categorized as mild (score < 4), moderate (score 5-6, or a score of 3 in any individual parameter) or severe (score ≥ 7). According to this scoring system, COVID-19 patients having a score of 5-6 have a 15.7% probability of critical events (ICU admission or 30-day mortality), and those having a score ≥ 7 have about a 24.1% probability of critical events. (5)

The pathogenesis of COVID-19 pneumonitis appears to involve a cytokine storm with elevated pro-inflammatory cytokines such as IL-6 and TNFα among many others, leading to respiratory failure. (6) Radiation therapy (RT) in low doses (< 100 cGy) is known to have an anti-inflammatory action by downregulating pro-inflammatory macrophages and upregulating anti-inflammatory macrophages (interleukin-10, transforming growth factor β1) and natural killer (NK) T cells, thus countering the immune reaction incited by COVID-19. (7) Low dose radiation therapy (LDRT) to the lungs can potentially mitigate the severity of pneumonitis, thereby reducing the risk of death. It was a popular treatment for viral pneumonias until the 1940s. (8) Historical data (9-13) suggest that LDRT to the whole lung can possibly prevent cytokine storm and ARDS. A few recent trials (14, 15), though with small sample sizes and short-term results, have shown encouraging outcomes with LDRT doses ranging from 0.5-1.5 Gy. We therefore conducted a pilot trial to study the feasibility and clinical efficacy of LDRT to the lungs in the management of patients with COVID-19.

This pilot study was conducted at our Institute to assess the feasibility and clinical efficacy of LDRT in patients with COVID-19. The study was initiated after approval by the Institute Ethics Committee (Ref. No. IEC-465/22.05.2020, RP-01/2020). It is registered on ClinicalTrials.gov (NCT04394793) and the Clinical Trials Registry India (CTRI/2020/06/025862). The study protocol was designed jointly by the radiation oncology team and the Institute COVID Management Team (comprising members from the departments of internal medicine, pulmonary medicine, intensive care, and the hospital infection control committee).
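As a concrete reading of the NEWS thresholds just described, the sketch below maps an aggregate score to the severity category and the critical-event probabilities cited from reference (5). The function and its names are illustrative only; scoring of the individual physiological parameters is not shown, and the source text leaves a score of exactly 4 unclassified:

```python
# Illustrative mapping of an aggregate NEWS value to the severity
# categories and critical-event probabilities quoted in the text.

def news_category(score, any_parameter_scores_3=False):
    if score >= 7:
        return "severe", 0.241    # ~24.1% risk of ICU admission or 30-day mortality
    if score >= 5 or any_parameter_scores_3:
        return "moderate", 0.157  # ~15.7% risk of critical events
    return "mild", None           # risk not quantified in the text

print(news_category(6))                               # ('moderate', 0.157)
print(news_category(2, any_parameter_scores_3=True))  # ('moderate', 0.157)
print(news_category(8))                               # ('severe', 0.241)
```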
The sample size of 10 patients was determined based on multiple factors, including the incidence of COVID-19, disease severity, the risk of viral exposure to the radiation oncology team, the availability of radiation therapy resources, and the distance between the COVID-19 indoor unit and the radiation therapy machine. Eligible patients (as per the inclusion criteria) were enrolled in the study after obtaining their consent. To avoid viral exposure to the team, a video conferencing facility was used for counseling patients and obtaining their consent. Complete details of the trial, the steps involved, and the benefits and side effects of treatment were explained to every patient.

Eligibility criteria: Eligibility criteria included age more than 18 years, diagnosis of COVID-19 confirmed by RT-PCR, and moderate to severe illness (NEWS ≥ 5). Generally, febrile patients who were already admitted to our indoor unit for COVID-19 management and had a respiratory rate of > 24 per minute and/or an oxygen saturation of < 94% were screened for inclusion, as they were likely to fulfill the eligibility criteria. Patients requiring mechanical ventilatory support or having unstable hemodynamic status were excluded. As the COVID-19 virus is highly contagious and poses a risk to the various staff members during laboratory and radiological investigations, we adopted the NEWS for inclusion of patients as well as for response assessment, as it is based mostly on clinical parameters (Table 1). We targeted patients having moderate to severe disease to assess the efficacy of LDRT. The study outcome measures were: 1) number of ICU admissions or deaths; 2) improvement in NEWS post LDRT; and 3) length of hospital stay post LDRT.

Treatment and workflow to the RT machine: All patients were treated as per the Institute COVID-19 standard management guidelines, along with the intervention of LDRT to both lungs with a dose of 70 cGy in a single fraction. The standard medical treatment generally consisted of oxygen supplementation, antibiotics, dexamethasone and general supportive care. LDRT was delivered employing 2 opposed antero-posterior and postero-anterior open portals. We did not use CT-based RT planning, as the prescribed dose is very low and the expected dose to organs at risk (OAR) is negligible. Additionally, CT simulation would increase the risk of viral exposure to staff. For patient transport from the indoor unit to the RT machine (Varian TrueBeam Radiotherapy System Linear Accelerator), the corridor was isolated. Patients were made to wear personal protective equipment (PPE), or a mask covering the face and head, before they were shifted to the RT area.

The patient was positioned supine with hands above the head. The machine gantry was rotated to 90 degrees and a lateral chest X-ray portal image was taken to guide measurement of the dose prescription point. The gantry was rotated back to 0 degrees and an adequate field size was opened to cover both lungs. The upper border of the field was kept above the superior edge of the lateral end of the clavicle, extending 1 cm superior to the lung apex on portal imaging. The lower border extended to the level of the L1 vertebra (transpyloric plane) to cover the costophrenic angles. The lateral borders were in the air on both sides of the chest. A dose of 70 cGy in a single fraction was prescribed at the midpoint of the anterior-posterior chest separation using 6 MV X-ray photons. The oxygen saturation level was continuously monitored during the entire treatment by placing the pulse oximetry monitor facing the CCTV camera.
After completion of treatment, the patient was transported back to the indoor unit through the same corridor. The Linear Accelerator unit was sanitized as per the cleaning and disinfection guidelines provided by the vendor.

Response assessment: The response assessment was done mainly on clinical parameters as per the NEWS. Imaging, routine hematological investigations and serum marker levels (such as C-reactive protein, D-dimer, interleukin-6, ferritin, etc.) were obtained as and when needed, but were not mandatory for response assessment. The NEWS was recorded on Day 3, Day 7 and Day 14. These scores were compared with the baseline score recorded on the day of LDRT (Day 0). Clinical response was defined as a subject achieving a NEWS of 0 within 14 days following LDRT. Failure was defined as ICU admission any time after LDRT or death within 30 days. Patients were generally discharged from the hospital after they attained a NEWS of 0 along with a negative test for COVID-19. They were contacted by phone for any further required information.

A total of 10 patients were recruited and treated from June to August 2020. Table 2 shows the clinical characteristics of the patients. All patients were male, with a median age of 51 years (range 38 to 63 years). Shortness of breath was the commonest symptom, found in all patients. The median respiratory rate was 22 per minute (range 21-27 per minute). The majority of patients (8 out of 10) had a NEWS of 5-6. Three patients had co-morbidities (hypertension, 2; diabetes, 1). LDRT was delivered after a mean of 3 days (range 1-9 days) of indoor admission (Table 3). Two patients (patients no. 3 and 8) received LDRT after 9 and 6 days of hospitalization, respectively, as they had mild symptoms initially. All patients completed the prescribed LDRT. The average time taken for LDRT (from entering to exiting the treatment room) was 11 minutes (8-34 minutes). One patient's treatment took unexpectedly long (34 minutes) because of a technical problem in switching on the machine. Another patient took 19 minutes because the radiation therapy technologist had difficulty starting the treatment due to fogging on the inside of the face shield. No patient required RT interruption due to deterioration of vital signs or oxygen saturation.

Table 3 shows the progress in NEWS post LDRT. One patient showed clinical deterioration and had to be intubated; he succumbed to ARDS on day 24. The remaining 9 patients had a complete clinical response and were finally discharged from the hospital after their COVID-19 test was negative. The average hospital stay of the cured patients was 15 days (range 10-24 days). Fig. 1 depicts the speed of clinical recovery. Most patients achieved a NEWS of 0 by Day 7. Day-30 clinical status was determined by contacting the patients or their relatives by phone. All 9 patients discharged from the hospital were alive. No patient showed signs of acute radiation toxicity.

The results of our study demonstrate the feasibility of using LDRT for the treatment of COVID-19 patients with moderate to severe disease. All patients completed the prescribed treatment without any hurdles. None of the radiation therapy team members involved in the LDRT of these patients acquired COVID-19 infection.
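The trial's outcome definitions stated above (clinical response = NEWS of 0 within 14 days of LDRT; failure = ICU admission at any time after LDRT or death within 30 days) can be written down directly; the sketch below is illustrative, with data structures and names that are assumptions rather than anything from the study's analysis code:

```python
# Illustrative encoding of the study's outcome definitions.
# news_by_day maps days after LDRT to recorded NEWS values.

def classify_outcome(news_by_day, icu_admission=False, death_day=None):
    if icu_admission or (death_day is not None and death_day <= 30):
        return "failure"
    if any(day <= 14 and score == 0 for day, score in news_by_day.items()):
        return "clinical response"
    return "indeterminate"

# Patient 1 from Table 3: NEWS 6 at Day 0, 1 at Day 3, 0 at Days 7 and 14.
print(classify_outcome({0: 6, 3: 1, 7: 0, 14: 0}))  # clinical response
# Patient 6: deteriorated, was intubated, and died on day 24.
print(classify_outcome({0: 5, 3: 6, 7: 9}, icu_admission=True, death_day=24))  # failure
```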
We used a simple, open-field technique for RT to minimize radiation planning and delivery time. No patient showed any signs of acute toxicity, and the treatment therefore appears clinically safe. In terms of clinical improvement, we observed a 90% clinical response rate in our study. This suggests that LDRT is clinically efficacious in COVID-19 patients with moderate to severe illness. Only one patient, who had an associated co-morbidity (hypertension), did not respond to LDRT and died of ARDS. It is well known that patients with co-morbidities have a higher risk of mortality. The other 2 patients in our series (patients no. 5 and 7) who had co-morbidities had a good response, recovered within a week, and were discharged within 2 weeks of LDRT. The overall results of our study are very encouraging and justify the conduct of a future randomized controlled trial with a larger sample size. Besides clinical efficacy, LDRT may also help avert the need for ICU admission. ICU care consumes considerable manpower, resources and cost, all of which are constrained in developing countries like India that have a high incidence of COVID-19.

The initial report of using X-rays to treat patients with pneumonia was in 1907 by Edsall and Pemberton (9). Subsequent reports up to the 1940s suggested that LDRT was successful in decreasing the mortality rate from about 30% in untreated patients to 5-10%. (10-13) In 1943, Oppenheimer (13) reported the results of LDRT using 50 roentgen in 56 patients with viral pneumonia and observed a cure rate of 80%. At that time, there was no concept of a lung correction factor, and therefore the actual dose delivered to the lungs in Oppenheimer's study was more than 50 roentgen. He further observed that if radiation was delivered in the first week of the disease, the cure rate was 100%, as compared to 50% when delivered after 2 weeks. Good clinical results were thus reported in these historical trials.

Recently, to the best of our knowledge, two studies (14, 15) have been published reporting the use of LDRT for COVID-19, and ours is the third one being reported here. Table 4 shows the comparison of these two studies with our present study. Our study has the largest sample size (10 vs 5 vs 5 patients). Both trials (14, 15) had higher median ages and more co-morbidities as compared to ours. The LDRT dose was highest in the study by Hess et al. (15) Though it is difficult to compare clinical outcomes due to the small sample sizes in all three studies (Table 4), the recovery rate was relatively higher in our study (90% vs 80% vs 80%). This is probably because our patients had relatively less severe disease and a lower frequency of co-morbidities. We believe LDRT is most effective in averting the cytokine storm and therefore may best be used before the cytokine storm has set in; our study was accordingly designed to enroll patients with moderate to severe illness.

Castillo et al. (17) have recently published a report of a 64-year-old patient treated with LDRT. They adopted CT-based planning, prescribing a dose of 1.0 Gy with a VMAT technique. A clinical target volume (CTV) consisting of the whole lung, as well as the organs at risk (OAR), were contoured. A circumferential 5-mm and craniocaudal 10-mm PTV expansion was created.
The procedure consisted of 30 minutes of planning and about 13 minutes of treatment delivery (including cone-beam CT scanning). The patient showed improvement after 3 days of LDRT and was shifted out of the ICU after 6 days. We do not encourage CT-based planning, considering the negligible dose to OAR at this low prescription dose.

Although it is hypothesized that LDRT mitigates COVID-19 pneumonitis by inducing an anti-inflammatory effect, animal, human and in vitro studies indicate that LDRT may also have the potential to control bacterial pneumonia. (18) Therefore, LDRT may also be capable of reducing bacterial co-infections in patients with COVID-19. Additionally, LDRT might prevent accelerated viral drug-related mutation, thus potentially improving the immune response by means of enhanced RNA damage compared with antiviral therapy. (18, 19)

Whenever RT is employed for benign conditions, concerns are expressed about the risk of radiation-induced carcinogenesis. It is often forgotten that, even without radiation exposure, a healthy human being carries a certain lifetime risk of developing cancer. The excess absolute risk (EAR) induced by radiation exposure can be estimated using the formula proposed by Preston (20), taking β = 10, θ = −0.05 and γ = 1. According to this formula, the increase in EAR is about 0.4% for 0.5 Gy and 1.2% for 1.5 Gy over a 20-year period. LDRT involves only the thoracic organs as OAR, rather than whole-body exposure, further minimizing the risk. This EAR is negligible considering the potential benefit of LDRT in the current pandemic.

Our study has several limitations, including the small sample size and response assessment without radiological or laboratory investigations. We wanted to keep the study design simple and convenient to conduct, considering the fear of radiation exposure among the patient population and the apprehension within the radiation therapy team, which is primarily a non-COVID team. There are several ongoing trials (ClinicalTrials.gov Identifiers: NCT04427566, NCT04572412, NCT04377477, NCT04493294 and NCT04393948) with primary outcome assessment based on clinical parameters. (21)

In conclusion, the results of our pilot study suggest that LDRT is feasible and clinically effective in COVID-19 patients having moderate to severe disease. Based on the encouraging results of two recently published trials (14, 15) and our study, it is justified to conduct large randomized controlled trials to establish the clinical efficacy of LDRT in reducing COVID-19 mortality.

• A low score (NEWS 1-4) should prompt assessment by a competent registered nurse, who should decide whether a change to the frequency of clinical monitoring or an escalation of clinical care is required.
• A medium score (i.e., NEWS of 5-6, or a RED score) should prompt an urgent review by a clinician with competencies in the assessment of acute illness (usually a ward-based doctor or acute team nurse), who should consider whether escalation of care to a team with critical-care skills is required (i.e., a critical care outreach team).
  o A RED score refers to an extreme variation in a single physiological parameter (i.e., a score of 3 on the NEWS chart in any one physiological parameter, colored RED to aid identification; e.g., an extreme heart rate).
• A high score (NEWS ≥ 7) should prompt emergency assessment by a clinical team/critical care outreach team with critical-care competencies, and usually transfer of the patient to a higher-dependency care area.

Table 3. Hospital stay and NEWS progression after LDRT.
Patient | Hospital stay (days) | Days from admission to LDRT | NEWS Day 0 | Day 3 | Day 7 | Day 14 | Day-30 status
1 | 10 | 3 | 6 | 1 | 0 | 0 | Alive
2 | 24 | 2 | 5 | 0 | 0 | 0 | Alive
3 | 18 | 9 | 5 | 0 | 0 | 0 | Alive
4 | 15 | 1 | 7 | 2 | 1 | 0 | Alive
5 | 13 | 1 | 5 | 0 | 0 | 0 | Alive
6 | 24 | 2 | 5 | 6 | 9 | - | Dead
7 | 15 | 2 | 7 | 4 | 0 | 0 | Alive
8 | 12 | 6 | 6 | 1 | 0 | 0 | Alive
9 | 14 | 2 | 6 | 3 | 0 | 0 | Alive
10 | 12 | 2 | 5 | 1 | 0 | 0 | Alive
Abbreviations: LDRT = low dose radiation therapy; NEWS = national early warning score; ICU = intensive care unit.
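As a quick cross-check of the reported summary statistics, the sketch below recomputes the mean interval from admission to LDRT and the mean hospital stay of the discharged patients from the Table 3 values; the column interpretation follows the reconstruction above and the surrounding text:

```python
# Table 3 values: (patient, hospital_stay_days, days_from_admission_to_LDRT, outcome)
rows = [
    (1, 10, 3, "Alive"), (2, 24, 2, "Alive"), (3, 18, 9, "Alive"),
    (4, 15, 1, "Alive"), (5, 13, 1, "Alive"), (6, 24, 2, "Dead"),
    (7, 15, 2, "Alive"), (8, 12, 6, "Alive"), (9, 14, 2, "Alive"),
    (10, 12, 2, "Alive"),
]

days_to_ldrt = [r[2] for r in rows]
print(sum(days_to_ldrt) / len(days_to_ldrt))  # 3.0 days, range 1-9 (as reported)

cured_stays = [r[1] for r in rows if r[3] == "Alive"]
print(sum(cured_stays) / len(cured_stays))    # ~14.8 days, reported as ~15 (range 10-24)
```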
COVID-19, caused by a novel coronavirus, was classified as a pandemic by the World Health Organization on March 11, 2020; a marketplace in Wuhan, China has been identified as a hotspot for its early spread, and perhaps its origin [1]. Since the outbreak began in December 2019, the virus has spread to more than 200 countries, with global fatalities exceeding 367,000 as of 31 May 2020 [2]. Extreme forecasts predicted that >2.7 million people could die of COVID-19 in the USA and UK alone [3]. The restrictive measures implemented to limit disease spread have involved the evacuation of schools and university campuses, cancelled sporting events and public gatherings, broad-scale travel bans, and stay-at-home ordinances. Byproducts of these measures include widespread unemployment, the closure of many small and independent businesses, geopolitical discourse about globalization, and an economic recession sweeping the world almost as swiftly as the disease itself. This calamity leaves the world's governments and thought leaders searching for answers. Such answers are urgent not only for human health but also for conservation.

We recommend that the most immediate and fair priority is critical scrutiny of the wildlife trade. First, the criminality of such trade must be taken seriously. Governments, regulators, and wildlife authorities should not tolerate blind eyes, loopholes, or the neglect of legislation, failings now vividly exposed, not only to conserve wildlife but also to save human lives. Furthermore, the contours of illegality should be extended. Currently, wildlife can be legally traded for a variety of consumptive and consumerist purposes at costs that are sometimes devastatingly measurable to human health, all too often to animal welfare and conservation, and that COVID-19 now reveals to be extraordinarily high. The uses of animals (e.g., for consumption, medicament, pets, or ceremony), however, are so diverse around the world that they defy simple arguments or solutions.

The COVID-19 pandemic has illustrated the extent to which human communities are linked. Diseases emanating from a single marketplace can spread around the globe in months. Members of both science and society have now stridently called for the outright banning of markets like the one where COVID-19 originally spread. Such calls are understandable, both as humane reactions to the gravity of the COVID-19 pandemic and as tactical efforts to rapidly promote changes that might otherwise take decades to enact. But in the desire to make the post-COVID-19 world a better one, both for humans and animals, the details matter [14]. We note here that millions of people around the world depend on meat, often wild-caught, traded in markets and rural communities for subsistence [15]. Sometimes, unacceptably, people illegally kill threatened species, but more often they harvest wildlife that can be taken both legally and sustainably, where sanctioned harvest systems exist [15]. There are a variety of very good reasons to reduce human dependence on illegally-harvested wildlife for subsistence. Importantly, however, these are long-term goals requiring fierce attention to the multi-faceted and highly variable details inherent to the diverse coupled human and natural systems around the world, and they are feasible only beyond the time-scale affordable for COVID-19 disease control and human health improvement. They are nonetheless essential if we are to preserve the rapidly-dwindling biodiversity that remains.
Therefore, without taking our eyes off the long game (e.g. carbon neutrality, strategic agriculture, reduced meat dependence, greater appreciation of conservation value), there is an obvious need, and opportunity, for immediate change. Less obvious, but gravely important, is how best to attend to the details of that change, and these details matter greatly. We suggest that a socially-just analysis of the diverse risks and ramifications of trade in wildlife, illegal and legal, should be the priority starting point.
The label "natural", especially when associated with food, supplements or drugs, is becoming essential for most consumers. After decades of looking to underground, mainly non-renewable resources, human beings are rediscovering the potential of plant extracts in human nutrition and health. The major goal of plant-based dietary supplements is to maintain or improve the overall health condition, prevent chronic diseases, and fill the nutritional gaps of the daily diet [1]. Following the COVID-19 global pandemic, nutraceuticals have recently been in the spotlight as potential therapeutic compounds against RNA viruses like influenza and coronaviruses [2, 3]. The possibility of reducing the risk of developing various diseases by using supplements has created a multibillion-dollar industry [4, 5]. The expected growth in demand in the coming years drives the interest in defining improved and economically competitive methods to produce natural bioactives that can be used as food ingredients and nutraceuticals.

Fruit and vegetable processing waste has untapped potential as a source of specific bioactive compounds ranging from proteins to essential fatty acids and polyphenols [6-8]. In Europe, juice production residue ranks as the 5th main contributor to the total yearly food waste, accounting for approximately 3% of the total weight disposed [9]. This waste is nowadays mainly used as livestock feed or soil fertilizer, used for the production of biofuels, or discarded in landfills [10]. Recovery of bioactives from food processing waste thus appears as an attractive opportunity that has received attention in recent years [11-13].

Black chokeberry (Aronia melanocarpa) is a perennial shrub that produces small dark berries with a very high polyphenol content. Their health-promoting effect was already known to Native Americans, who used the berries for the treatment of colds [14], but since then many other effects have been discovered. According to the literature, the strong antioxidative properties of chokeberries may be effective in the treatment of health disorders related to oxidative stress, and of cancers [15, 16]. Other effects under investigation are anti-inflammatory effects for the protection of blood vessels [17], antibacterial activity [18], prevention and treatment of noncommunicable diseases [19], and a decreased risk of diabetes together with increased effectiveness of insulin [20]. This great interest in the effects on human health is matched by an increasing focus on the study of the bioactive compounds in chokeberry and its processing byproducts [21]. In the extensive review by Denev et al. [22], chokeberry fruits, juices and concentrates are reported as rich sources of proanthocyanidins, anthocyanins, flavonols, flavan-3-ols and hydroxycinnamic acids. Kapci et al. [23] reported total phenolics, flavonoids and anthocyanins in a variety of chokeberry products including the fruit, juice, concentrate and pomace, and observed that the pomace had the highest content on a dry weight basis. Some research works have focused on pomace valorization targeting the extraction of total phenolics, commonly quantified as gallic acid equivalents [24-26]. Among polyphenols, anthocyanins are perhaps the subgroup that has attracted the most interest, due to their many applications either as bioactives or as natural colorants [27, 28]. Table 1 shows an overview of the research works dedicated to the extraction of anthocyanins from chokeberry juice pomace.
The anthocyanin extraction yields reported for chokeberry pomace are mostly within the range of 5-20 mg/g DW (dry weight basis), with a few exceptions in the higher range of 66-115 mg/g DW. It is important to note that the juice production methodology influences the composition of the pomace and its potential. Vagiri and Jensen [29] compared the pomace obtained by different juice production methods and observed anthocyanin yields of 1.14 mg/g FW (fresh weight basis) for pomace obtained after enzymatic treatment followed by hot pressing (50 °C), and 1.86 mg/g FW when the pomace was produced by cold pressing (2 °C). Oszmiański and Lachowicz (2016) [30] reported yields in pomace obtained from crushed and uncrushed berries of 66.5 and 115.7 mg/g DW, respectively. As could be expected, the pomaces obtained under milder juice processing conditions retain a larger fraction of the anthocyanins, and thus can reach higher absolute yields in the recovery step. The comparison of different extraction techniques should bear this information in mind in order to be fair to the maximum potential of any given feedstock.

The extraction solvent is typically selected according to the polarity of the target component, in order to maximize solubility and mass transfer. As shown in Table 1, alcohols and alcohol-water mixtures are the preferred solvents. However, from the perspective of a process that needs to be upscaled to produce food ingredients or dietary supplements, water remains the safest, cheapest, most readily available and most environmentally friendly option.

Most of the research works reported in Table 1 have been dedicated to the ultrasound-assisted extraction (UAE) of chokeberry pomace. D'Alessandro et al. [32] explored the UAE process under the following conditions: solvent, 0-50% ethanol (v/v); temperature, 20-70 °C; sonication power, 0-100 W; and duration, 0-4 h. In their study, they report an optimum yield of 12.0 mg cyanidin-3-O-glucoside equivalents (CGE)/g DW at 34% ethanol, 70 °C, 100 W, and 17 min. Extended treatment resulted in anthocyanin degradation, which occurred at all temperatures higher than 45 °C. In the same study, they emphasize that if the use of ethanol or heating is not desired, the same yield can be reached using water at 20 °C for 55 or 184 min, with and without sonication, respectively. The poor thermal stability of anthocyanins is well known. Sui et al. [38] tested the stability of anthocyanin aqueous solutions stored at temperatures in the range of 4 to 65 °C and observed that higher temperatures were detrimental to anthocyanin preservation. Mauricio et al. [39] observed degradation of anthocyanins during decoction of sour cherry liquor pomace at 100 °C and recommended keeping the extraction process temperature at 25 °C. Wozńiak et al. [37] compared three different extraction methodologies, homogenization in methanol, stirring in 50% ethanol, and supercritical CO2 extraction, and reported anthocyanin yields of 18.12, 2.45 and 10.02 mg CGE/g extract, respectively. Homogenization in methanol showed a superior performance, probably because the extraction was repeated 5 times with fresh solvent on the same material. In the same study, supercritical CO2 extraction required large amounts of ethanol as a co-solvent (80% w/w) in order to increase the polarity of the solvent mixture. Dulf et al. [35] evaluated the extractability of anthocyanins in chokeberry pomace after solid-phase fermentation with two different microbial cultures incubated at pH 5.5.
The anthocyanin yield increased from 5.01 mg CGE/g DW to 6.1 and 5.9 mg CGE/g DW when using cultures of Rhizopus oligosporus and Aspergillus niger, respectively. The maximum yields were observed after 2 days of incubation, after which the yields decreased.

Anthocyanins are known to be more stable in slightly acidic media. Ekici et al. [40] studied the stability of extracts obtained from red cabbage, black carrot and grape in the pH range from 3.0 to 7.0 and observed that anthocyanins were more stable in the lower pH range. Howard et al. [41] tested the stability of anthocyanins in chokeberry juice at pH levels of 2.8, 3.2 and 3.6 and observed that in this range the stability also increased with acidity, although to a minor extent. Most of the works reported in Table 1 use formic acid or hydrochloric acid as acidity modifiers. In the present study, the acidification of the solvent was performed by addition of citric acid, a colorless, odorless, water-soluble and non-toxic polycarboxylic acid commonly used in foods, beverages and pharmaceuticals [42].

Most of the studies reported in Table 1 performed the extractions at a constant liquid-solid ratio. However, the characteristics of the final extract are also affected by the amount of solvent used in the process. In general, the use of large liquid-solid ratios results in higher amounts of total product extracted. However, the extracts produced can become very diluted, while at the same time the total amount of buffer salts or acidity modifiers in the final product rises to an unrealistic level. Besides the extraction techniques already mentioned, other methodologies have been reported in the literature to aid in the recovery of bioactives, including microwave heating [43], the combination of high temperature and pressure [44], electroporation [45] and enzymatic treatment [46]. As discussed above, given the particular stability behavior of anthocyanins, a thorough examination of the process conditions and their interrelations is key to the successful implementation of the recovery process.

The present study focuses on the identification of the optimal operating conditions for the sustainable extraction of anthocyanins from chokeberry juice production waste using homogenization and extraction in aqueous citric acid. The conditions studied are the citric acid content in water, the temperature and the liquid-solid ratio. Response surface methodology was used to construct an empirical model that relates the process factors to the observed responses. The empirical model was then used to identify the maximal predicted responses, which are discussed from a process-oriented perspective. Furthermore, the selected set of process conditions was replicated in the laboratory and compared with the predicted results. The aim of this study is not only to provide an optimum for the recovery process at laboratory scale, but also to open the discussion on the suitability of different sets of conditions for the production of bioactives.

Black chokeberry (Aronia melanocarpa) pomace obtained by cold pressing was provided by the juice production facility Elkaerholm (Egtved, Denmark) in November 2017. After the pressing step, the pomace (moisture content 65 ± 1 wt%) was immediately stored at −20 °C and thawed at 5 °C before processing. Exhaustive extractions were performed using analytical grade methanol (VWR Prolabo, Søborg, Denmark) and 37% v/v hydrochloric acid (Sigma Aldrich, Søborg, Denmark).
Aqueous extraction media were prepared using demineralized water and citric acid monohydrate (99.5% w/w) (Sigma Aldrich, Brøndby, Denmark). HPLC eluents were prepared using milliQ water (ELGA PureLab® Chorus, Glostrup, Denmark), analytical grade trifluoroacetic acid (Sigma Aldrich, Søborg, Denmark), and HPLC grade acetonitrile (VWR Prolabo, Søborg, Denmark). Cyanidin-3-O-galactoside (≥ 97%), cyanidin-3-O-glucoside (≥ 96%) and cyanidin-3-O-arabinoside (≥ 95%) were purchased from Extrasynthese (Genay, France).

The total extractable anthocyanin content in chokeberry pomace was assessed following the process reported by Dinkova et al. [47] with minor modifications. Briefly, 500 mg of thawed pomace were suspended in 15 mL acidified methanol (1% HCl v/v). The mixture was vortexed for 10 s, homogenized in an Ultra-Turrax T18 (IKA, Aarhus, Denmark) for 5 min, sonicated for 20 min, and finally centrifuged (5000 rpm, 22 °C, 5 min). The supernatant was collected, and the pellet was re-extracted with two more portions of 15 mL acidified methanol. The resulting supernatants were pooled together, brought up to 100 mL, and analyzed. The anthocyanin content obtained was taken as the reference for 100% total anthocyanin content (TAC) in the pomace (Fig. 1).

The aqueous extractions were performed in batch mode in 1 L jacketed glass extractors coupled to an external thermostatic bath. 1000 mL of solvent were heated to the desired temperature shown in Table 2 and the pomace was added. The extraction mixture was then homogenized using an Ultra-Turrax T18 (IKA, Aarhus, Denmark) operated at 9500 rpm for 30 min. The homogenization treatment allowed mixing and re-circulation of the particles in suspension in the extraction liquid. Sampling was performed at regular time intervals, as shown in Fig. 2, by pipetting 10 mL aliquots from middle depth in the reactor, in order to ensure that the sample was well mixed and representative of the whole system. A control extraction test without homogenization was performed for 60 min. The samples collected were centrifuged (5000 rpm, 22 °C, 5 min) and the supernatant was further analyzed for total anthocyanin content, pH and total dissolved solids.

The experimental plan followed a face-centered central composite design with three factors and three levels, requiring a total of 15 experiments. The experiments were performed in random order in three individual replicates (n = 3) and the results are shown as mean ± standard deviation. The factors and levels tested are shown in Table 2. The range of temperatures (T) selected was 30-70 °C, following literature data for optimal extractability and minimum degradation of anthocyanins as summarized in Table 1. The ranges of liquid-solid ratios (LSR) and citric acid content (CA) were selected based on the following constraints: (1) that the pH of the mash should be ≤ 3.0 for stability reasons, and (2) that the weight ratio of citric acid added to total anthocyanins in the extract should be ≤ 50, in order to give a realistic boundary.

Anthocyanins were quantified by high-performance liquid chromatography (HPLC) (HP 1200 series, Agilent Technologies Aps, Naerum, Denmark) equipped with a photodiode array detector. The stationary phase was a C18 column. The particle size distribution of the samples was measured in an LS 13 320 Laser Diffraction Particle Size Analyzer (Beckman Coulter, Inc., Krefeld, Germany), and the total dissolved solids were measured as Brix degrees using a digital refractometer (A. Krüss Optronic GmbH, Hamburg, Germany).
The pH was measured using a digital pH meter equipped with a PHG301 electrode (Radiometer Analytical, Copenhagen, Denmark). Statistical significance (p < 0.05) was assessed by one-way analysis of variance (ANOVA) and Tukey's test using IBM SPSS Statistics version 26. Response surface methodology (RSM) was used to evaluate the effect of the parameters on the extraction of anthocyanins and to construct the predictive model. The software Statistica version 13.5 (TIBCO Software Inc.) was used to optimize the anthocyanin concentration as well as the total anthocyanin extracted as a function of the extraction temperature, citric acid concentration and liquid-solid ratio. RSM was also used to predict the final pH and total dissolved solids in the extract. The response surface plots were developed using the fitted quadratic polynomial equations obtained from the regression analysis, changing two variables while the third factor was held at a constant value.

This section describes the optimization of the extraction of anthocyanins from chokeberry juice pomace. First, the total extractable anthocyanin content in the feedstock is quantified using sequential exhaustive extractions in acidified methanol. The process to be optimized consisted of homogenization and extraction using aqueous citric acid. The duration of the homogenization treatment was selected targeting a constant particle size distribution in the mash. The extractions were performed under different conditions following the experimental design shown in Table 2. The experimental outputs measured were used to construct the empirical model based on response surface methodology. The quadratic equations were then used to assess the influence of the process conditions on the responses, and the optimal conditions were rationalized and discussed from a process-oriented mindset.

Four monoglycosylated anthocyanins were identified in the chokeberry pomace extracts based on their retention times and spectral properties: cyanidin-3-O-galactoside (62%), cyanidin-3-O-arabinoside (30%), cyanidin-3-O-xyloside (4%), and cyanidin-3-O-glucoside (2%). The total anthocyanin content in the pomace was 62.8 ± 5.5 mg/g DW (dry weight) as measured by exhaustive extraction with acidified methanol. This content is taken as the reference for the maximum yield of 100% total anthocyanin content (TAC) in this feedstock. The value reported falls towards the higher range of anthocyanin yields reported in Table 1, being in close agreement with the yield reported by Oszmiański and Lachowicz (2016) for the pomace of crushed berries, 66.5 mg/g DW. The chokeberry juice pomace used in this study had been produced by cold pressing of the berries, which explains the high anthocyanin content left in the waste. Based on this observation, chokeberry juice pomace is confirmed to have great potential as a feedstock for the recovery of anthocyanins, and the optimization of the process is of utmost relevance.

In this section, extractions were performed with and without homogenization under constant process conditions as follows: aqueous citric acid content, 0.75%; temperature, 50 °C; and liquid-solid ratio, 50 g solvent/g fresh pomace. In solid-liquid extractions from plant material, the rate-limiting step is the diffusion of the solute from the solid matrix into the extraction solvent [48].
Decreasing the size of the solid particles is one effective way of increasing the contact area between solvent and solid matrix, which typically results in enhanced diffusion, higher extraction rates and reduced extraction times. Figure 1 shows an effective decrease in the size of particles in suspension during homogenization. After 30 min of treatment, the distribution converged to a constant median diameter of ~200 µm, with 80% of the particle volume within the range of 150-550 µm particle diameter. The homogenization treatment resulted in a much faster extraction process. As shown in Fig. 2, conventional extraction with stirring required 45 min to achieve a constant anthocyanin yield, while the tests with homogenization required only 5 min. It is important to note that in the homogenization study the maximum yield was reached before the median particle diameter became constant (5 min and 30 min, respectively). Thus, it can be concluded that the particle size reduction obtained in the first 5 min of homogenization was already sufficient to maximize anthocyanin extraction. Despite this observation, further experiments were performed with 30 min homogenization in order to nullify the possible influence of particle size variations on the observed responses. This decision was made on the basis that prolonged homogenization treatment did not show a detrimental effect on the total anthocyanin extracted, as seen in Fig. 2. Whereas the homogenization treatment clearly increased the rate of extraction, the overall extraction yield was not affected. The extractions performed with and without homogenization resulted in the same anthocyanin yield of ~41.8 mg/g DW, which corresponds to 66.5% of the total content in the raw material, as measured by exhaustive sequential extractions in acidified methanol.

In this study, the extraction of anthocyanins from chokeberry pomace was optimized with regard to the total anthocyanin content extracted and the anthocyanin concentration in the extract. The parameters selected for the extraction optimization were the following: temperature, liquid-solid ratio and citric acid content in the water. In order to ensure a reproducible particle size distribution, all extraction experiments were performed with a homogenization treatment of 30 min. Table 3 shows the independent factors and levels tested, as well as the experimental outputs observed: total anthocyanin content extracted, anthocyanin concentration in the extract, total dissolved solids and final pH of the extract. The response surface regression model selected was based on second-order quadratic equations, accounting for the relations between the independent factors and the measured responses.

None of the three factors tested had a significant effect on the total anthocyanin content extracted at a 95% confidence level. Nevertheless, the highest influence observed was that of the LSR (p < 0.08), followed by the synergistic influence of LSR and citric acid (p < 0.08). The anthocyanin concentration was mainly influenced by the LSR (p < 0.05), with minor influences of the amount of citric acid used (p < 0.06) and the temperature (p < 0.07). The only significant synergistic effect was that of LSR and citric acid (p < 0.01). The total dissolved solids in the extracts were mainly influenced by the amount of citric acid used (p < 0.00001), the LSR (p < 0.001), and the synergy of the two (p < 0.05), as expected. Likewise, the final pH was also influenced by the citric acid content (p < 0.05) and the LSR (p < 0.05).
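To make the second-order response surface model concrete, the sketch below builds the coded face-centered central composite design (8 factorial corners + 6 face centers + 1 center point = 15 runs, as in Table 2) and fits a full quadratic model (linear, interaction and squared terms) by least squares. The response values are synthetic placeholders, not the study's measurements:

```python
import itertools
import numpy as np

# Face-centered central composite design in coded units (-1, 0, +1)
# for three factors (CA, T, LSR): 8 corners + 6 face centers + 1 center.
corners = list(itertools.product((-1, 1), repeat=3))
faces = [tuple(s if j == i else 0 for j in range(3))
         for i in range(3) for s in (-1, 1)]
X = np.array(corners + faces + [(0, 0, 0)], dtype=float)  # 15 runs

# Synthetic response standing in for, e.g., anthocyanin concentration;
# the real measurements are in Table 3 of the paper.
rng = np.random.default_rng(0)
y = (450 - 180 * X[:, 2] + 40 * X[:, 0] - 60 * X[:, 0] * X[:, 2]
     + 30 * X[:, 2] ** 2 + rng.normal(0, 10, len(X)))

def quadratic_terms(x):
    ca, t, lsr = x
    return [1, ca, t, lsr,              # intercept and linear terms
            ca * t, ca * lsr, t * lsr,  # two-way interactions
            ca ** 2, t ** 2, lsr ** 2]  # quadratic terms

A = np.array([quadratic_terms(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # second-order least-squares fit
y_hat = A @ coef
print("R^2 =", 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum())
```

The fitted coefficients play the role of the quadratic polynomial equations used in the paper to draw the response surfaces and locate the optima.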
In general, the liquid-solid ratio was the factor with the most significant effect on all the output variables, while the temperature had the least influence. This observation can be interpreted in two ways: either the influence of temperature on mass transfer was masked by the stronger influence of the LSR, or the increase in anthocyanin extraction with temperature was counteracted by degradation. However, with the current empirical model, it is not possible to discern between the two possibilities. In order to identify selective anthocyanin degradation, the profile of products obtained at different temperatures was compared. Selective degradation of given anthocyanins in the extract may not be noticed when comparing the absolute total content extracted. However, selective degradation would undoubtedly affect the proportion of individual anthocyanins in the mixture. The effect of temperature on the profile of anthocyanins was investigated at the two ends of the temperature range of interest, 30 and 70 °C, and for a period double the selected extraction time (60 min). The following process conditions were kept constant: CA, 0.75 wt% and LSR, 50 g/g, in order to ensure that the pH in the extracts was the same (2.44 ± 0.01) and did not influence the outcome of the stability study. For clarity, the results focus on the two main anthocyanins in chokeberry, cyanidin-3-O-galactoside (cyd-3-gal) and cyanidin-3-O-arabinoside (cyd-3-ara), which combined account for approximately 92% of the anthocyanins in the extracts. Figure 3 shows that the proportions of cyd-3-ara and cyd-3-gal during the extractions performed at 30 °C remain constant for the whole 60 min period. However, at 70 °C cyd-3-ara decreases steadily, and this is compensated by an increase in the proportion of cyd-3-gal. This observation confirms that the extraction temperature can have an influence, even if minor, on the anthocyanin profile. In the scenario with the highest temperature (70 °C), the variation in both components was less than 1% and was not considered a problem for the characteristics of the extract at this or lower temperatures. However, this influence should not be totally neglected, especially when operating at high temperatures or with extracts composed of anthocyanins with different thermal stabilities.

The optimal set of conditions for the extraction process would be one that maximizes the two responses related to the target product: the TAC and the [AC]. Figure 4 shows these two modelled responses as a function of pairs of variables, with the third one fixed. Both TAC and [AC] are mainly influenced by the LSR. However, increasing the LSR results in a higher amount of anthocyanin content extracted (higher TAC), but less concentrated extracts (lower [AC]). In this section, the two responses have been optimized independently and a compromise between the two has been reached for an overall optimal set of conditions. The individual optimization of TAC provided the optimal parameters of CA, 1.5 wt%; T, 47 °C; and LSR, 71 g/g; with predicted responses of 79.42% TAC and 240.7 mg/L [AC]. This results in an extract concentration that is, however, among the lowest reported in this study. Following this set of experimental conditions, the consequent downstream processing into a final product could become costly and inconvenient. Individual optimization of [AC] provided the following parameters: CA, 1.5 wt%; T, 43 °C; and LSR, 20 g/g.
Whereas CA and T were in close agreement with the optimal parameters from the separate TAC optimization, the optimal LSR of 20 g/g was, in this case, the lowest of the range studied (20-80), resulting in a somewhat lower TAC of 67.26% but the overall largest [AC] of 737.3 mg/L. In order to truly optimize the recovery method from a process-oriented perspective, a cost-effectiveness analysis would need to consider not only the economics of the extraction process, but also the eventual downstream purification of the product. The choice of downstream unit operations will depend on the product formulation targeted in terms of product design. For instance, if the process envisions a final drying step, a moderate increase in TAC due to large solvent usage would probably not justify the high cost of concentration and evaporation associated with it, and an optimum LSR value should be allocated. In the case of other separation technologies like filtration, membrane distillation or molecular distillation, the extract volume to be treated will undoubtedly influence the dimensioning of the equipment as well as the operating costs. Adsorption-desorption processes, on the other hand, are less sensitive to the LSR, since the product is selectively adsorbed on a solid phase and then desorbed into a new liquid stream of known volume. In the particular case of chokeberry anthocyanins, the solvent of choice in adsorption-desorption processes is an organic solvent [49, 50]. Even though this research work is focused on the extraction unit operation, the authors would like to emphasize the importance of a good synergy between the extraction process, downstream processing and product design.

Given the scope of this study, and since information about the economics of downstream alternatives was not available to the authors at the present time, both outputs TAC and [AC] were given equal weight in the optimization. For this reason, [AC] was normalized to the highest [AC] reported in this study and is hence expressed as [AC]Norm (%). Figure 5 shows the variation of TAC and [AC]Norm as a function of LSR in the range studied, with the citric acid content and temperature fixed at 1.5% and 45 °C, respectively. The individual optima obtained from the model are shown as the TAC optimum and the [AC] optimum, at LSR values of 71 and 20, respectively. The intersection of the two lines provides the overall combined optimum for the two outputs at an LSR value of 34 g/g. In a similar manner, Fig. 6 shows the predicted variation of pH and TDS, also as a function of LSR. Even though the total dissolved solids and final pH were not prioritized in the optimization, these parameters have a strong influence on the characteristics of the product. First, the acidity level is of paramount importance for anthocyanin stability both during processing and storage [51]. Because of this, all extractions performed in this study had pH values equal to or below 3.0, which is a reasonable threshold for acceptable anthocyanin preservation. On the other hand, the amount of total dissolved solids influences the anthocyanin concentration in the extract on a dry basis, which is related to its purity. Furthermore, downstream processing may need to accommodate the characteristics of the extract, in terms of desired TDS and acidity, to given applications as a food ingredient or supplement. The combined optimization reported in this study presents a set of operating conditions that results in a compromise between the total component extracted and the extract concentration.
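The combined optimum can be located numerically as the LSR at which the TAC curve and the normalized concentration curve cross. The sketch below does this with a root finder on the difference of two illustrative quadratic response functions; the coefficients are placeholders chosen only to reproduce the qualitative shapes in Figure 5, not the paper's fitted model:

```python
from scipy.optimize import brentq

# Illustrative quadratic responses in LSR (CA and T held at 1.5 wt%
# and 45 °C); coefficients are placeholders, not the fitted model.
def tac(lsr):          # total anthocyanin content extracted, % (increasing)
    return 50 + 0.9 * lsr - 0.0055 * lsr ** 2

def ac_norm(lsr):      # anthocyanin concentration, % of its maximum (decreasing)
    return 115 - 1.6 * lsr + 0.009 * lsr ** 2

# Combined optimum: LSR where the two normalized curves intersect.
lsr_star = brentq(lambda x: tac(x) - ac_norm(x), 20, 80)
print(f"combined optimum at LSR ~ {lsr_star:.0f} g/g")
```

With the real fitted polynomials in place of these placeholders, the same root-finding step yields the reported crossover at an LSR of 34 g/g.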
The mathematical model developed can be used as a predictive tool to establish relations between conditions and responses, and thus aid in the selection of optimal processing conditions for a given application in terms of product yield and composition. The combined optimal set of conditions selected (CA, 1.5 wt%; T, 45 °C; LSR, 34 g/g) was replicated in the laboratory in order to validate the model. The experimental results were compared with the predicted output variables, as shown in Table 4. In all cases, the experimental values were slightly lower than those provided by the model. TAC, TDS and final pH were predicted with a relative error < 10%, whereas for the anthocyanin concentration the relative error was 18%, in agreement with the larger standard error of the predicted value.

This research work presents the extraction of anthocyanins from chokeberry pomace as a case study for process optimization. The total anthocyanin content extracted and the anthocyanin concentration in the extract were optimized using response surface methodology. The most influential factor was in both cases the liquid-solid ratio, followed by the citric acid content in the solvent. The optimal conditions that maximized both outputs were the following: 1.5 wt% citric acid, 45 °C and 34 g solvent/g pomace. The optimized set of conditions was validated in the laboratory, resulting in 71 ± 3% of total anthocyanin content extracted and an anthocyanin concentration in the final extract of 456.7 ± 19 mg/L. The experimental results observed for the optimal conditions deviated from the calculated outputs by a relative error lower than 10% for TAC, TDS and final pH, and of 18% for the anthocyanin concentration. An optimized extraction process is the first step towards building a successful recovery process for a given feedstock. In this study, a compromise between the total component extracted and the extract concentration has been reached. However, economic considerations, including the cost of downstream processing, need to be taken into account in order to locate the optimal parameter conditions for an eventual cost-effective production. Nevertheless, the model developed in this study can be used as a prediction tool for the characteristics of chokeberry extracts produced by acidified-water extraction, which will hopefully bring more attention to by-products and biowaste as renewable sources for the sustainable production of high-value products.
Since the appearance of the coronavirus disease 2019 (COVID-19) outbreak in late December 2019 in Wuhan (China), there has been an increasing number of publications regarding the potential neurotropism of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). While COVID-19 primarily affects the respiratory tract [1][2][3], neurological alteration and damage are not uncommon. Early studies from China reported various neurological symptoms (including anosmia, headache, myalgia) and central nervous system (CNS) complications (cerebrovascular events, encephalopathy, and neuroinflammatory syndromes) in a large number of COVID-19 patients [4]. SARS-CoV-2 is similar in structure and pathogenesis to the other β-coronavirus family members, namely the severe acute respiratory syndrome coronavirus (SARS-CoV) and the Middle East respiratory syndrome coronavirus (MERS-CoV). Similarity in structure and pathogenesis predicts similarity in clinical presentation and symptomatology. Prior experimental research established the neurotropism and neuroinvasiveness of coronavirus strains in the earlier SARS and MERS epidemics, even in the absence of pulmonary manifestations. SARS-CoV-2 has been suggested to have a similar, but higher, affinity for CNS targets [5]. Although it remains unresolved how the coronavirus affects the human nervous system, two main pathophysiological hypotheses offer the most likely explanations. One suggested mechanism is direct viral invasion through hematogenous, transcribrial, and retrograde neuronal dissemination pathways. The other possible mechanism is via hyperimmune-related reactions: activation of inflammatory cells and cytokines precedes cytokine storms, resulting in the activation of coagulation cascades, disseminated intravascular coagulation (DIC), and multiple end-organ failure, including CNS complications. In this mini-review, we aim to summarize some of the most common imaging findings seen in patients with COVID-19. However, it is important to bear in mind that the exact relationship with SARS-CoV-2 has not yet been fully established. Indeed, while COVID-19-associated neurological features suggest a possible causal or synergistic relationship between cerebral events and SARS-CoV-2 infection, coincidental events (rather than causal association) might explain some of these imaging results. In fact, given the high prevalence of COVID-19 in communities, coincidence of the infection with other diseases is an expected and highly likely phenomenon. In our prior review of 20 studies comprising 90 patients with neuro-COVID-19 symptoms (among 116 patients with coronavirus-family infection) [6], 37 (41%) patients had normal imaging studies (brain CT or MRI). Among those who displayed abnormal neuroimaging findings (59%), vascular thrombosis, cortical signal abnormalities, hemorrhage, hemorrhagic posterior reversible encephalopathy syndrome (PRES), acute hemorrhagic necrotizing encephalopathy (ANE), meningitis/encephalitis, and acute disseminated encephalomyelitis (ADEM) were the most commonly reported abnormalities [6]. Similarly, in a recent observational study [7] of 73 patients who presented with neurological symptoms, 41.1% demonstrated normal brain MRI.
The remaining 58.9% had various neuroimaging abnormalities, including acute ischemic infarcts (23.3%), cerebral venous thrombosis (1.4%), multiple microhemorrhages (11.3%), perfusion abnormalities (47.7%), multifocal white matter lesions (5.5%), basal ganglia lesions (5.5%), meningeal enhancement (4.8%), central pontine myelinolysis (4.1%), hypoxia-induced lesions (4.1%), restricted diffusion foci within the corpus callosum consistent with cytotoxic lesions of the corpus callosum (CLOCC, 4.1%), PRES (2.7%), and neuritis (2.7%). The authors also found that two imaging patterns, multifocal white matter lesions and basal ganglia abnormalities (observed primarily in ICU patients with more severe illness), were related more explicitly to SARS-CoV-2 infection itself rather than to complications of the disease. In another multicenter observational study by Kermer et al. on 37 patients [8], the most frequent white matter abnormalities were described in three distinct patterns: signal abnormalities in the medial temporal lobe (43%), non-confluent multifocal white matter hyperintense lesions on FLAIR and diffusion imaging associated with hemorrhages (30%), and white matter microhemorrhages (24%). They stated that the medial temporal lobe signal abnormalities were similar to those found in viral or autoimmune encephalitis, whereas patterns 2 and 3 featured microhemorrhages. They also found that the presence of hemorrhage worsened the prognosis in these patients. Numerous publications have already ascertained the thrombo-inflammatory nature of SARS-CoV-2 infection [9, 10]. The elevated production of coagulopathy factors such as fibrinogen, platelets and D-dimer, and of inflammatory cytokines (interleukin-6), along with capillary endothelial damage, predisposes patients to thromboembolic events, leading to stroke, thrombosis, and hemorrhage. Therefore, it is not surprising that stroke-related imaging findings are among the most frequent abnormalities seen in patients with neuro-COVID symptoms. Findings such as territorial acute/subacute infarction, multiple ischemic foci, evidence of thrombus in large intra- and extracranial vessels, cerebral venous thrombosis complicated by hemorrhagic infarcts, and cortical/subcortical microhemorrhages have been reported frequently in various observational studies [6, 11]. The precise pathophysiology of white matter microhemorrhages in COVID-19 patients remains unclear. It has been suggested that they might be related to diffuse endothelial dysfunction, secondary to direct viral invasion of the endothelial cells (via ACE-2 receptors) and the subsequent endothelial inflammation. Guillain-Barré syndrome (GBS) has been reported as a significant parainfectious peripheral neurological sequela of SARS-CoV-2 infection, as with other coronavirus strains. These patients generally have normal MRI studies of the central nervous system, although post-contrast enhancement of the cervical and lumbar nerve roots has been reported frequently [7]. COVID-19-associated GBS mainly develops within a few days to weeks of established viral infection. This temporal relationship reflects a post-infective immune-mediated process as the primary responsible mechanism. In a review by Caress et al. [12] of 37 cases of GBS associated with COVID-19, the mean interval between COVID-19 symptoms and GBS onset was 11 days. Brain imaging had been performed in fewer than half of the patients (14/37) and displayed cranial nerve abnormalities in only 28% of them.
Furthermore, spine imaging was obtained in 15 cases, with 40% showing abnormal features, including root enhancement, radiculitis, leptomeningeal enhancement, and myelopathy. MRI perfusion abnormalities are observed in a large number of individuals presenting with COVID-19-associated neurological signs. These abnormalities might be related to hypoxic-ischemic events and seizures, or may present as an isolated finding [7]. In the study by Helms et al., bilateral frontotemporal hypoperfusion, leptomeningeal enhancement, and stroke-related abnormalities were the most conspicuous neuroimaging findings in COVID-19 patients presenting with severe illness [13]. A range of COVID-19-associated inflammatory features has been identified on radiological exams, including post-infectious ADEM, acute hemorrhagic leukoencephalitis, myelitis, and autoimmune encephalitis. As mentioned earlier, virus-induced vasculopathy and coagulopathy, direct invasion of the CNS, hyperinflammatory reactions and post-infective autoantibodies might partly explain these findings. In terms of radiological findings, unilateral FLAIR hyperintensity and/or diffusion restriction in the medial temporal lobe has been frequently reported in these patients, similar to the changes seen in autoimmune limbic encephalitis [11]. Similar imaging results have been featured in the context of meningitis/encephalitis in a few case series [6, 14]. PRES, multifocal white matter abnormalities, basal ganglia lesions, and leptomeningeal enhancement have also been named as relevant neuroimaging manifestations of COVID-19 [11, 15]. Again, a vasculitis-like phenomenon and inflammatory cascades, secondary to COVID-19-induced CNS damage, are the likely responsible mechanisms for these findings [16]. Nuclear medicine studies are not routinely employed in COVID-19 patients [17]. However, molecular imaging with FDG PET-CT is potentially able to add valuable data regarding the pathophysiological basis of the disease [18, 19]. To date, a few reports have described metabolic abnormalities in patients presenting with neuro-COVID-19 manifestations. In a case series by Delorme et al. [20], brain FDG PET-CT was performed in four COVID-19 patients with possible immune encephalitis. PET displayed a consistent pattern of metabolic abnormalities in all patients: hypometabolism in the prefrontal or orbitofrontal cortices and hypermetabolism in the cerebellar vermis. None of these patients had specific MRI features or significant cerebrospinal fluid (CSF) abnormalities. Given the PET-CT findings and the negative CSF SARS-CoV-2 PCR results, the authors suggested a parainfectious cytokine storm or immune-mediated process rather than direct neuroinvasion. Other case reports have described similar findings. Grimaldi et al. [21] demonstrated diffuse cortical hypometabolism, associated with putaminal and cerebellar hypermetabolism, in autoimmune encephalitis concomitant with SARS-CoV-2 infection. In another case study, the anosmia of COVID-19 was evaluated using FDG PET-CT; the authors found hypometabolism of the left orbitofrontal cortex in a COVID-19 patient presenting with persistent isolated anosmia [22]. These findings, along with normal morphological data on MRI, might suggest reduced neuronal activity and functional alterations in neuro-COVID-19 patients. However, further studies (specifically using BOLD functional MRI) are still needed before a definite conclusion can be drawn.
Brain imaging should be considered in the diagnostic work-up of those COVID-19 patients who present with neurological symptoms. In this regard, radiologists and clinicians should be familiar with the spectrum of neuroimaging findings in COVID-19 in order to detect and understand the disease process and to evaluate its progression during the course of treatment. Further studies are still warranted regarding the long-term neurological sequelae and prognostic implications of COVID-19.
Novel coronavirus pneumonia (NCP) is mainly transmitted through respiratory droplets and close contact. The World Health Organization (WHO) listed NCP as a public health emergency of international concern [1] and officially named the disease caused by the novel coronavirus "coronavirus disease 2019" (COVID-19) [2]. Characteristic chest CT imaging patterns, positive nucleic acid detection in nasal and throat swab samples, normal or decreased numbers of peripheral white blood cells, decreased numbers of lymphocytes and increased levels of inflammatory cytokines are the key factors in the diagnosis of COVID-19 [3]. There are many kinds of lymphocytes in human peripheral blood, including CD3+CD4+ helper T lymphocytes (CD4+ T lymphocytes) and CD3+CD8+ cytotoxic T lymphocytes (CD8+ T lymphocytes). The percentage of CD4+ T lymphocytes and the ratio of CD4+/CD8+ lymphocytes are decreased in HIV-infected patients. To investigate whether the peripheral blood lymphocytopenia in COVID-19 patients is mainly caused by a decrease in CD4+ T lymphocytes, patients admitted to our hospital with different severities of COVID-19 were examined as subjects in this study. The total number of lymphocytes, the percentages of the lymphocyte subtypes and the levels of the inflammatory cytokines (TNF-α and IL-6) secreted by CD4+ helper T lymphocytes in the peripheral blood were determined by hematology analyzer and flow cytometer, respectively. The number of CD4+ lymphocytes and the ratio of CD4+/CD8+ lymphocytes were calculated and examined, providing a new approach for studying the mechanism of 2019-nCoV-induced lymphocyte reduction. A total of 35 patients (26 men and 9 women; age range: 8-70 years) with COVID-19 who were admitted to The Affiliated Infectious Diseases Hospital of Nanchang University were evaluated. According to the diagnostic and clinical classification criteria of the Novel Coronavirus Infection Prevention and Control Plan (2nd Edition) issued by the National Health Commission of the People's Republic of China, the patients were divided into a general COVID-19 group (n = 13, 10 men and 3 women), a severe COVID-19 group (n = 12, 9 men and 3 women) and a critical COVID-19 group (n = 10, 8 men and 2 women). The normal control group (n = 20, 15 men and 5 women) comprised individuals undergoing routine physical examination. There were no significant differences in the ratio of men to women or in age among the groups (p > 0.05). The clinical symptoms of the patients with COVID-19 included fever, cough, sore throat and muscle ache. The 2019-nCoV nucleic acid tests performed on throat swabs were positive in all the patient groups, and CT showed the changes characteristic of viral pneumonia. Laboratory examination showed that the white blood cell counts in the peripheral blood were normal or decreased and that the total number of lymphocytes was decreased. All the patients with COVID-19 had a history of travel to Wuhan, Hubei Province, or to an epidemic area outside Wuhan, Hubei Province. None of the patients died during the study, and the 2019-nCoV nucleic acid tests in the normal control group were all negative. The study protocol was approved by the ethics committee of the Affiliated Infectious Diseases Hospital of Nanchang University (approval no. 202005, Nanchang, China), and written informed consent was obtained from all participants when the participants were awake, or from their next of kin when the participants were minors and/or comatose.
None of the patients or normal controls suffered from AIDS, influenza A or B virus infection, hepatitis B virus infection, tuberculosis infection, autoimmune diseases or other types of pneumonia. Throat swabs from the patients were tested for the presence of 2019-nCoV nucleic acids by RT-PCR on an ABI 7500 PCR amplification instrument (Applied Biosystems Inc., California, USA). The conserved gene sequences of the open reading frame (ORF1a/1b) and the nucleocapsid protein (N) in the 2019-nCoV genome were used as targets. When the ORF1a/1b and N target fragments were both positive at the same time, with clearly rising amplification curves, the nucleic acid test result was determined to be positive. The 2019-nCoV nucleic acid RT-PCR test kit (including the amplification primers) was produced by Shanghai BioGerm Biotechnology Co., Ltd (Shanghai); the national machinery registration number of the test kit is 20203400065, and its catalog number is ZC-HX-201-2. The total numbers of peripheral white blood cells and lymphocytes in the patients were determined by impedance combined with laser detection on a BC-6900Plus automatic hematology analyzer (Shenzhen Mindray Biomedical Electronic Co., Ltd., Shenzhen, China). The detection reagents for peripheral blood cell analysis were purchased from Shenzhen Mindray Biomedical Electronic Co., Ltd (Shenzhen, China): the cell staining solution was M-60FN staining solution (Mindray, product number 105-012183-00), the hemolysin was M-60LD hemolysin (Mindray, product number 105-012177-00), and the diluent was DS diluent (Mindray, product number 105-005707-00). The blood cell control material was COULTER 5C CELL Control (Beckman Coulter Co., Ltd., product number 7547001), and the quality control data of this control material were used as the control. The total number of lymphocytes was one of the results obtained by routine blood cell analysis. The subtypes of lymphocytes in the peripheral blood of the patients were analyzed on a Beckman Coulter DxFLEX flow cytometer (Suzhou Saijing Biotechnology Co., Ltd, Suzhou, China). The lymphocyte population was gated based on brightly positive CD45 staining and low SSC in the CD45 vs SSC dot plot. The CD3 vs SSC dot plot then showed the lymphocyte population, and the T lymphocyte population was gated based on brightly positive CD3 staining. Suppressor/cytotoxic (CD3+CD8+) and helper/inducer (CD3+CD4+) lymphocytes were then identified in the CD8 vs CD3 and CD4 vs CD3 dot plots, respectively. Ten microliters of anti-CD antibodies labeled with different fluorescent dyes and 50 μl of EDTA-anticoagulated peripheral blood were fully mixed, protected from light, and incubated at room temperature (19-25 °C) for 15-20 minutes. Then, 500 µl of hemolysin (Beckman) was added, and the solution was vortexed for 10 seconds, mixed well, protected from light, and incubated at room temperature (19-22 °C) for 15-20 minutes until it changed from turbid to clear. Then, 200 µl of saline was added, and the solution was mixed well for detection by flow cytometry. The test kits used to analyze the lymphocyte subtypes, including the anti-CD antibodies labeled with different fluorescent dyes, were produced by Beckman Coulter Co., Ltd.
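The sequential gating strategy just described can be pictured as a chain of boolean filters. The sketch below is schematic Python, assuming events arrive as rows of a pandas DataFrame; the column names, intensities and thresholds are illustrative, not instrument settings from this study.

```python
import pandas as pd

# Schematic sequential gating: each row is one flow-cytometry event with
# fluorescence intensities and side scatter (SSC); values are made up.
events = pd.DataFrame({
    "CD45": [950, 980, 120, 990, 970],
    "SSC":  [180, 150, 900, 200, 160],
    "CD3":  [800, 850,  50, 100, 820],
    "CD4":  [700,  60,  10,  20, 680],
    "CD8":  [ 40, 750,  15,  30,  50],
})

BRIGHT, LOW_SSC = 500, 400                                         # illustrative gates
lymphs = events[(events.CD45 > BRIGHT) & (events.SSC < LOW_SSC)]   # CD45 vs SSC gate
t_cells = lymphs[lymphs.CD3 > BRIGHT]                              # CD3+ T-cell gate
cd4_t = t_cells[t_cells.CD4 > BRIGHT]                              # CD3+CD4+ helper T
cd8_t = t_cells[t_cells.CD8 > BRIGHT]                              # CD3+CD8+ cytotoxic T

pct_cd4 = 100 * len(cd4_t) / len(lymphs)   # % CD4+ T cells among gated lymphocytes
print(f"CD4+ T cells: {pct_cd4:.1f}% of lymphocytes")
```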
(740 West 83rd St., Hialeah, FL 33014, USA). The production approval document number (China) for the lymphocyte subtype test kits was guoxiezhujin20173401372, and the medical device production license number (China) of the Beckman Coulter DxFLEX flow cytometer is sushiyaojianxieshenchanxu20130069. The test kit used for the detection of the lymphocyte subtypes, CYTO-STAT tetraCHROME (Beckman Coulter Co., Ltd.), which includes the CD45, CD3, CD4 and CD8 antibodies, has catalog number 6607013. The hemolysin reagent was Opti Lyse C (Beckman), catalog number A11895. All antibodies were used undiluted in this study. The absolute number of CD4+ T lymphocytes was calculated as the total number of lymphocytes × the percentage of CD4+ T lymphocytes; the total number of lymphocytes in the patients was determined from the results of routine blood cell analysis. The levels of TNF-α and IL-6 in the plasma of the patients were simultaneously determined by cytometric bead array (CBA) via flow cytometry. First, 3-5 mL of heparin-anticoagulated blood from the patients was centrifuged to obtain plasma. Twenty-five microliters of immune microspheres and 25.0 μL of patient plasma were mixed and incubated in the dark at room temperature for 2.5 hours. Then, 25 μL of fluorescent detection reagent was added. After washing the microspheres, the fluorescence intensity of the washed microspheres was quantitatively measured on a Beckman Coulter DxFLEX flow cytometer (Suzhou, China). Different concentrations of TNF-α and IL-6 standards were analyzed together with the plasma to be tested. The fluorescence intensity of the microspheres was determined by flow cytometry and was in direct proportion to the concentrations of TNF-α and IL-6 examined. The CBA test kit used for the detection of TNF-α and IL-6 in plasma was produced by Jiangxi Saiji Biotechnology Co., Ltd. (Nanchang, China), with national machinery registration number 20180010. The human plasma TNF-α and IL-6 detection kit is named the human Th1 and Th2 subgroup detection kit (Jiangxi Saiji), product number P010001. The TNF-α and IL-6 antibodies were used undiluted in this study. All the data are expressed as the mean ± standard deviation (SD). Comparisons of continuous variables between two groups were performed using Student's t-test, and p < 0.05 was considered statistically significant. All the statistical analyses were performed using IBM SPSS Statistics 23.0 software for Windows (SPSS Inc., Chicago, IL, USA). In the COVID-19 patients in the general, severe and critical groups, the numbers of peripheral lymphocytes and CD4+ T lymphocytes and the ratio of CD4+/CD8+ lymphocytes were significantly lower than those in the normal control group. The levels of peripheral lymphocytes and CD4+ T lymphocytes and the ratio of CD4+/CD8+ lymphocytes in the general group were lower than those in the normal control group (p = 0.000441252, 0.000404213, and 0.003613912, respectively, all <0.01), the levels in the severe group were lower than those in the general group (p = 0.009585116, 0.000294487, and 0.004389093, respectively, all <0.01), and the levels in the critical group were lower than those in the severe group (p = 0.00011258, 0.001189364, and 0.00764968, respectively, all <0.01). All the differences were significant, with p < 0.01 (Fig 1).
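The derived quantities described above are simple arithmetic, sketched below in Python with illustrative values (the counts and group data are made up, not patient data); the two-group comparison uses SciPy's Student's t-test, mirroring the analysis performed in SPSS.

```python
import numpy as np
from scipy import stats

# Absolute CD4+/CD8+ counts from the analyzer total and flow-cytometry fractions.
total_lymphs = 1.1e9           # total lymphocytes per litre (blood analyzer)
pct_cd4, pct_cd8 = 0.28, 0.24  # subtype fractions (flow cytometry)

cd4_abs = total_lymphs * pct_cd4 / 1e6   # cells/uL (1 L = 1e6 uL) -> 308
cd8_abs = total_lymphs * pct_cd8 / 1e6   # -> 264
ratio = cd4_abs / cd8_abs                # CD4+/CD8+ ratio -> ~1.17
print(cd4_abs, cd8_abs, round(ratio, 2))

# Two-group comparison as in the study: Student's t-test at alpha = 0.05.
group_a = np.array([310, 290, 350, 280])   # e.g. CD4+ counts, patient group (made up)
group_b = np.array([520, 560, 500, 540])   # e.g. normal controls (made up)
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```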
The correlation coefficient between the number of peripheral blood CD4+ T lymphocytes and the number of total lymphocytes in the 35 COVID-19 patients was 0.9051 (p = 0.000002, <0.01). In the patients with COVID-19, the decreased number of CD4+ T lymphocytes was positively correlated with the decreased number of total lymphocytes: the lower the number of CD4+ T lymphocytes, the lower the number of total lymphocytes. In the patients with COVID-19, the total number of peripheral blood lymphocytes decreased mainly as the number of CD4+ T lymphocytes decreased. The levels of peripheral CD8+ T lymphocytes in the general group were lower than those in the normal control group (p = 0.02, <0.05), the levels in the severe group were lower than those in the general group (p = 0.007, <0.01), and the levels in the critical group were lower than those in the severe group (p = 0.002, <0.01). All the differences were significant, with p < 0.05 (Fig 2). The TNF-α and IL-6 levels in the peripheral blood of the COVID-19 patients in the general, severe and critical groups were significantly higher than those in the peripheral blood of the subjects in the normal control group. The levels of TNF-α and IL-6 in the general group were higher than those in the normal control group (p = 0.00000209 and 0.00001729, respectively), the levels in the severe group were higher than those in the general group (p = 0.00007539 and 0.00004138, respectively), and the levels in the critical group were higher than those in the severe group (p = 0.00003602 and 0.00006652, respectively). All the differences were significant, with p < 0.01 (Fig 3). [Fig 1 caption: The total numbers of lymphocytes and CD4+ lymphocytes and the ratio of CD4+/CD8+ lymphocytes in the patients in the general COVID-19 group were significantly lower than those in the normal control group. The levels in the severe COVID-19 group were significantly lower than those in the general COVID-19 group, and the levels in the critical COVID-19 group were significantly lower than those in the severe COVID-19 group. The total number of lymphocytes (10^9 cells/L) and CD4+ lymphocytes (cells/µL) and the ratio of CD4+/CD8+ lymphocytes of the patients in the normal control group were all set to 1.0. **p<0.01. https://doi.org/10.1371/journal.pone.0239532.g001] [Fig 2 caption: The total number of CD8+ lymphocytes in the patients in the general COVID-19 group was significantly lower than that in the normal control group, the levels in the severe COVID-19 group were significantly lower than those in the general COVID-19 group, and the levels in the critical COVID-19 group were significantly lower than those in the severe COVID-19 group. Values in the normal control group were all set to 1.0. *p<0.05; **p<0.01. https://doi.org/10.1371/journal.pone.0239532.g002] Coronaviruses (CoV) are divided into four genera: α-, β-, γ- and δ-CoV. The 2019-nCoV that causes COVID-19 is a β-coronavirus, an enveloped, positive single-stranded RNA (ssRNA) virus. The shape of 2019-nCoV is round or oval, with a diameter of 60-140 nm, as observed by electron microscopy. Human-to-human transmission of 2019-nCoV is clear. It was found that the genomic sequence of 2019-nCoV shares more than 85% identity with that of SARS-CoV [4].
ACE2 (angiotensin-converting enzyme 2), which is expressed in the lower respiratory tract of humans, is confirmed as a cellular receptor for 2019-nCoV [5]; this receptor mediates cellular entry of the virus and causes severe and potentially fatal respiratory tract infections. The incubation period is 1-14 days, mostly 3-7 days. The main clinical manifestations are fever, headache, dry cough, fatigue and muscle ache; some patients have symptoms such as nasal congestion, pharyngeal pain and diarrhea. The clinical types of COVID-19 include the general type, the severe type and the critical type. Patients in severe and critical condition can rapidly develop respiratory failure, septic shock and multiple organ failure after approximately 1 week [6]. All the patients in our study exhibited typical clinical manifestations, such as fever, cough, sore throat and muscle ache. The critical patients required mechanical ventilation with a ventilator, and the severe patients had dyspnea, along with the characteristics of severe viral pneumonia observed on CT images. The clinical symptoms of the general-type patients were mild, and their lung CT showed images characteristic of mild viral pneumonia. Pulmonary CT imaging can diagnose viral pneumonia, but determining the cause of the disease requires 2019-nCoV nucleic acid detection. All of our subjects were positive according to a throat swab 2019-nCoV nucleic acid test, suggesting that the diagnosis of COVID-19 was clear and that there was no misdiagnosis. [Fig 3 caption: The TNF-α and IL-6 levels of the patients in the general COVID-19 group were significantly higher than those in the normal control group. The levels in the severe COVID-19 group were significantly higher than those in the general COVID-19 group, and the levels in the critical COVID-19 group were significantly higher than those in the severe COVID-19 group. The TNF-α and IL-6 levels in the peripheral blood of patients in the normal control group were all set to 1.0. **p<0.01. https://doi.org/10.1371/journal.pone.0239532.g003] It was reported that the total number of peripheral blood (PB) lymphocytes in patients with COVID-19 is decreased [7]. To study which type of lymphocyte exhibits the greatest decrease, we analyzed the lymphocyte subtypes in the PB of patients with COVID-19. The total number of lymphocytes was obtained by routine blood tests, the percentage of each lymphocyte subtype was determined by flow cytometry, and the absolute number of lymphocytes in each subtype was calculated. The results showed that the lymphocytopenia in patients with COVID-19 was mainly manifested as a decrease in CD4+ helper T lymphocytes (in absolute number); in addition, the ratio of CD4+/CD8+ lymphocytes decreased, indicating a typical cellular immune dysfunction similar to that observed in HIV patients. A CD4+ T lymphocyte count below 200 cells/µL in the peripheral blood is the basis for antiviral treatment in HIV patients. Although there is no direct relationship between the genomes of the two viruses [8], the results of this study suggest that for COVID-19 patients with decreased CD4+ T lymphocyte numbers in the peripheral blood, the protease inhibitors used to treat human immunodeficiency virus (HIV) infection, such as lopinavir and ritonavir, are worth trying and studying to improve patient outcomes. CD4+ T lymphocytes are mainly helper T lymphocytes (Th), including Th1 and Th2, which can secrete inflammatory cytokines.
At present, the specific mechanism by which CD4+ T lymphocyte numbers decrease in the peripheral blood of patients infected with SARS-CoV-2 is unclear. There are four possible mechanisms: (1) 2019-nCoV directly damages CD4+ T lymphocytes without invading them; (2) 2019-nCoV directly invades CD4+ T lymphocytes and uses them as host cells; (3) patients with COVID-19 develop viremia and progress to systemic inflammatory response syndrome (SIRS), and CD4+ T lymphocytes or other immune cells secrete large amounts of inflammatory cytokines, resulting in excessive consumption of CD4+ T lymphocytes; (4) SARS-CoV-2 inhibits the differentiation and production of CD4+ T lymphocytes. At present, there is no evidence that 2019-nCoV can directly invade peripheral blood CD4+ T lymphocytes, nor that a 2019-nCoV receptor exists on the CD4+ T lymphocyte membrane. The maturation and formation of CD4+ T lymphocytes depend on the function of human thymus tissue cells, which can express the ACE2 receptor [9], suggesting that damage to human thymus cells caused by 2019-nCoV may be one cause of the decrease in peripheral blood lymphocytes in patients with COVID-19. The inflammatory cytokine storm plays an important role in the development of COVID-19 [10]. TNF-α and IL-6 are important inflammatory cytokines, and we found that their levels in the plasma of patients with COVID-19 were significantly increased and positively correlated with the severity of the disease. The significant increase in the levels of TNF-α and IL-6 in the plasma of patients with COVID-19 is related to the many kinds of inflammatory immune cells that secrete inflammatory factors. The secretion of inflammatory factors may require the assistance of CD4+ cells, which can themselves secrete large amounts of inflammatory cytokines. Our results suggest that the decrease in the number of CD4+ T lymphocytes in patients with COVID-19 may be related to the excessive consumption of CD4+ T lymphocytes. How to effectively increase the number of CD4+ T lymphocytes in the peripheral blood of patients with COVID-19 still requires further study. We found that CD4+ T cells in the peripheral blood were decreased in individuals infected with COVID-19 but that the plasma levels of TNF-α and IL-6, which can be produced by CD4+ T cells, were significantly increased. The reason may be that, in addition to CD4+ Th cells in the peripheral blood, large numbers of monocytes in the blood and macrophages in human tissues, including the liver and intestine, can secrete large amounts of TNF-α and IL-6 into the blood. The number of CD4+ Th cells in the peripheral blood decreased; however, this decrease does not mean that the ability of single CD4+ Th cells to secrete TNF-α and IL-6 decreased. The number of CD4+ Th cells in the peripheral blood significantly decreased while the plasma levels of inflammatory cytokines significantly increased in patients with COVID-19; this finding may be related to the increased secretion of TNF-α and IL-6 by the many immune cells other than CD4+ Th cells in the peripheral blood, or to an enhanced ability of single CD4+ Th cells to secrete TNF-α and IL-6 during 2019-nCoV infection. We detected a decrease in the number of CD4+ Th cells in the peripheral blood, but this does not mean that the number of CD4+ Th cells in human tissues, such as the lymph nodes or spleen, decreased.
The number of CD4+ Th cells in the peripheral blood significantly decreased while the plasma levels of inflammatory cytokines significantly increased in patients with COVID-19, which may also be related to the sequestration of CD4+ T cells in tissues, where they are not detected in the blood. The targeting of thymus tissue by 2019-nCoV may be an important reason for the decrease in CD4+ Th cells in the peripheral blood of COVID-19 patients. Many immune cells in the blood secrete TNF-α and IL-6, especially during infection and inflammation, and monocytes in the peripheral blood and macrophages in the tissues are the main sources of large amounts of TNF-α and IL-6. This is also the main reason why, although the number of CD4+ Th cells in the peripheral blood significantly decreased, the plasma levels of TNF-α and IL-6 significantly increased in patients with COVID-19. The secretion of TNF-α and IL-6 by CD4+ Th cells in the peripheral blood may not be the main contributor to the significant increase in TNF-α and IL-6 in the peripheral blood of COVID-19 patients. Our study also found that not only the absolute number of CD4+ Th cells but also the absolute number of CD8+ Tc cells decreased in the peripheral blood. However, the decrease in the number of CD8+ Tc cells was not as substantial as that of CD4+ Th cells, resulting in a trend of decreasing CD4+/CD8+ lymphocyte ratios in the peripheral blood with increasing severity of the disease. CD8+ Tc cells are cytotoxic T lymphocytes and constitute one of the subtypes of T lymphocytes. The differentiation and maturation of all T lymphocytes in the body, including CD4+ and CD8+ T lymphocytes, depend on thymocytes; therefore, the targeting of thymus tissue by 2019-nCoV may also be an important reason for the decrease in the number of CD8+ Tc cells in the peripheral blood of COVID-19 patients. Further study is needed to investigate the mechanism by which CD4+ and CD8+ T cell numbers decrease in the peripheral blood of COVID-19 patients. In patients with COVID-19, the number of peripheral blood lymphocytes decreased, mainly manifesting as a decrease in the number of CD4+ T lymphocytes, a decrease in the ratio of CD4+/CD8+ lymphocytes, and a decrease in the number of CD8+ lymphocytes; the degrees of these reductions were significantly correlated with the severity of disease. The levels of TNF-α and IL-6 in the peripheral blood were significantly increased in COVID-19 patients, and the degree of this elevation was significantly correlated with the severity of disease. The significantly decreased levels of CD4+ T lymphocytes and the CD4+/CD8+ lymphocyte ratio, together with the significantly increased levels of TNF-α and IL-6, in the peripheral blood can be used as important laboratory indicators to assess the severity of COVID-19. Supporting information. S1 Table: The experimental data for the numbers of lymphocytes, CD4+ T lymphocytes, and CD8+ lymphocytes and the ratio of CD4+/CD8+ lymphocytes in the peripheral blood of patients with different severities of COVID-19. Compared with the normal control group, *p<0.05, **p<0.01. (DOC) S2 Table: The experimental data for the levels of TNF-α and IL-6 in the peripheral blood of patients with different severities of COVID-19. Compared with the normal control group, **p<0.01. (DOC)
Coronavirus disease 2019 (COVID-19), the disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was first detected in late 2019 in Wuhan, China [1]. Since its emergence, COVID-19 has spread globally, causing massive morbidity and mortality worldwide [2]. The World Health Organization declared COVID-19 a public health emergency of international concern on 30 January 2020 and subsequently characterized it as a pandemic on 11 March 2020 [3]. To date, there have been more than 18 million confirmed infections worldwide, with over 675,000 deaths attributed to COVID-19 [2]. Early in the pandemic, evidence indicated that minority groups and those of lower socioeconomic position suffered disproportionately from COVID-19, both in the United States (US) and abroad [4, 5]. Historically, these same groups have experienced an inordinate burden of disease, both infectious and chronic. For reliability, the original data source for the CHR was located and used when available. If information was unavailable from the original data source, the county statistic was estimated as the mean of the geographically surrounding counties, as was done for the food environment index. For rare events where data were suppressed due to small numbers, such as infant mortality, values were estimated as zero. The COVID-19 case definition for the GA DPH was an individual with positive molecular testing for SARS-CoV-2. Cases recorded by the GA DPH were reported through electronic lab reporting and the state electronic notifiable disease surveillance system, as well as via calls or faxes from providers [23]. The continuous dependent variable in our study was the cumulative number of confirmed cases per 100,000 residents in a county, as publicly reported by the GA DPH on 1 August 2020. Data for cases per 100,000 residents were log-transformed for normality before analysis. Cases were excluded from our analysis if they did not have a known county of residence in Georgia at the time of case reporting. Descriptive statistics, including the mean, median, and standard deviation for all variables, were calculated. Variables for which data could not be ascertained or accurately estimated were omitted from the model selection process. CHR rankings for access to care, such as the primary care physician rate, dentist rate, and mental healthcare provider rate, were excluded from the analysis due to the small geographic size of counties and the possibility of residents of adjoining counties sharing providers. Racial demographics were separated into minority or non-Hispanic White. Table S1 reports the variables included in and excluded from our analysis and their level of inclusion or exclusion in the county health rankings. Initial multivariable regression analysis revealed significant multicollinearity, which was not easily rectified using standard techniques related to variance inflation, the condition index, and variance proportion diagnostics. A predictor collinearity matrix (Figure 1), created using the statistical programming language R v.3.6.3 (R Core Team, R Foundation for Statistical Computing, Vienna, Austria), revealed this multicollinearity, justifying the use of a technique other than best-subset multivariable regression analysis. Lasso (least absolute shrinkage and selection operator) regression analysis was used due to the number of predictors and the overlap of variables. Figure 1 confirms that the selection of lasso for the analysis was appropriate, as opposed to other coefficient shrinkage techniques such as elastic net [25].
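The data preparation just described is sketched below in Python on a hypothetical pandas DataFrame of county records; the county names, values and adjacency map are invented for illustration.

```python
import numpy as np
import pandas as pd

# Toy county table: NaN marks a statistic unavailable from the original source.
counties = pd.DataFrame(
    {"cases": [1200, 340, 95], "population": [65000, 21000, 11000],
     "food_env_index": [7.1, np.nan, 6.0]},
    index=["A", "B", "C"],
)
adjacent = {"B": ["A", "C"]}   # counties geographically surrounding B

# Missing county statistics estimated as the mean of the surrounding counties.
for county, nbrs in adjacent.items():
    if np.isnan(counties.loc[county, "food_env_index"]):
        counties.loc[county, "food_env_index"] = counties.loc[nbrs, "food_env_index"].mean()

# Dependent variable: cumulative cases per 100,000 residents, log-transformed
# for normality before analysis.
counties["rate"] = 1e5 * counties["cases"] / counties["population"]
counties["log_rate"] = np.log(counties["rate"])
print(counties)
```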
By shrinking coefficients to zero, lasso variable selection allowed automated determination of the most important variables in a group of collinear determinants where traditional least-squares linear regression models failed and the variance of the least-squares estimators was unacceptably high. Proposed by Tibshirani in 1996 [26], lasso has been used in a variety of settings with similar sets of variables for outcomes with complex sets of underlying predictors [27][28][29]. Lasso improves prediction not only in settings with significant multicollinearity but also when many predictors each contribute small to moderate effects. Hence, this technique is useful in the analysis of large data sets that include variables such as demographics, housing statistics, and economic indicators, which often overlap both within and between groups of variables. Lasso analysis was performed in SAS v9.4 (SAS Institute Inc., Cary, NC, USA) using the PROC GLMSELECT procedure and the default Schwarz Bayesian Information Criterion (SBC) for variable selection. A sensitivity analysis was also performed by applying the same procedures to data from 1 April, 1 May, 1 June and 1 July, providing multiple cross-sectional analyses to understand how the predictive variables and the overall predictive ability changed over time as the virus spread throughout the state. As of 1 August 2020, the confirmed cumulative rate reported by the DPH for the state of Georgia was 1726.62 per 100,000 residents, with 190,012 cumulative cases diagnosed in the state. Once non-residents and patients with unknown residency status were excluded, the mean and median county case rates were 1748.34 and 1538.46 per 100,000 residents, respectively. Cases per 100,000 residents varied from 612.60 in Long County to 6140.11 in Chattahoochee County. Long County reported 19,915 residents according to the DPH [23] and was 81% rural according to the CHR [25]; Chattahoochee County reported 10,749 residents and was 30% rural. The cumulative COVID-19 cases per 100,000 at the conclusion of the study are also shown by county in Figure 2. Descriptive statistics, including the mean, median, and standard deviation for case rates and for the twelve variables chosen by the lasso analysis on 1 August 2020, are shown in Table 1; the mean, median, and standard deviation for all independent variables considered are shown in Table S2.
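For readers working outside SAS, scikit-learn's LassoLarsIC offers a BIC-driven analogue of PROC GLMSELECT with the SBC criterion. The sketch below runs on simulated stand-in data (159 rows mirroring Georgia's 159 counties; the predictors and effects are invented), not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC
from sklearn.preprocessing import StandardScaler

# Simulated stand-in for the county-level design matrix.
rng = np.random.default_rng(0)
n, p = 159, 30                                     # 159 counties, 30 candidate predictors
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)       # deliberately collinear pair
y = 0.5 * X[:, 0] - 0.3 * X[:, 5] + rng.normal(scale=0.5, size=n)

Xz = StandardScaler().fit_transform(X)             # yields standardized coefficients (beta_z)
model = LassoLarsIC(criterion="bic").fit(Xz, y)    # BIC plays the role of SAS's SBC

selected = np.flatnonzero(model.coef_)             # predictors surviving shrinkage to zero
print("selected predictor indices:", selected)
print("standardized coefficients:", np.round(model.coef_[selected], 3))
```

Refitting the same selector on each monthly snapshot of the outcome would reproduce the kind of sensitivity analysis described above.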
The overall Pearson's correlation coefficient on 1 August 2020 was 0.4940, with an adjusted r-squared of 0.4525, indicating that the final model had a moderate correlation with cumulative case rates by county. Lasso and other coefficient shrinkage methods eliminate some variables from the analysis by shrinking their coefficients to zero. The standardized coefficients (βz) can therefore be interpreted relative to each other within the model but should not be interpreted directly in terms of the dependent variable; unlike in traditional ordinary least squares regression, the coefficients do not directly represent a percent change in cumulative case rates in our model. Lasso analysis was used in our case to choose variables in the face of collinearity, as well as to identify variables that may have only a mild to moderate association. Socioeconomic predictors in the final model included the teen birth rate, children in poverty, children qualifying for free lunch, the child mortality rate, and the percentage of uninsured adults. The strongest indicators were those involving children, with the highest coefficient for the percentage of children living in poverty (βz = 0.125). Additionally, children qualifying for free lunch (βz = 0.115) and the child mortality rate (βz = 0.11) had stronger positive associations with increasing cumulative case rates relative to the other variables in the final model. Lesser contributing variables were uninsured adults (βz = 0.078) and the teen birth rate (βz = 0.035). The percentage of non-Hispanic Whites (βz = -0.174) and the percentage of those with long commutes who drive alone (βz = -0.183) had the strongest standardized coefficients and were inversely related to cumulative case rates. In addition to minority status, other demographic indicators included were the percentage of residents under 18 (βz = 0.034), the percentage of female residents (βz = -0.067), the percentage of residents not fluent in English (βz = 0.086) and the Black/White segregation index (βz = 0.088). Other variables included were the percentage with an annual influenza vaccine (βz = -0.062) and the percentage of those who self-report poor or fair health (βz = 0.09). Our sensitivity analysis, shown in Table 2, indicates how the variables chosen by the lasso analysis changed over time at monthly intervals beginning 1 April 2020. Standardized coefficient (βz) estimates are included for each variable in the table.
However, due to the mechanics of lasso analysis mentioned above, these estimates should not be compared between models or related directly to cumulative case rates. We report them to show their negative or positive association with cumulative case rates and to allow comparison, within a model, of the contribution of each variable at a specific time point. Table S3 includes t statistics and p-values for each coefficient presented below. A strengthening association of the predictive variables with the outcome and a generally increasing number of chosen variables over time were observed. The adjusted r-squared on 1 April was 0.0930, with only one variable being predictive of cumulative case rates; by 1 August 2020, twelve variables were included in the model, with an adjusted r-squared of 0.4540. On 1 April 2020, race was not predictive of higher cumulative case rates. However, from 1 May 2020 until the final model on 1 August 2020, higher proportions of minorities were consistently predictive of counties with increased cumulative case rates. This was the most consistently included variable, chosen in all models after 1 April. Some variables included at earlier time points, such as higher levels of air pollution (PM2.5) and violent crime rates, were considered indicators of urban versus rural spread. With time, more indicators of socioeconomic status, such as low birthweight and lack of insurance, entered the model. The overall increase in the adjusted r-squared and in the number of socioeconomic variables predictive of increased case rates shows that, as COVID-19 spread over time in the state, the social determinants of health became increasingly predictive of higher cumulative case rates in the counties. Sequential lasso regression analysis showed an increasing trend in the predictive value of the social determinants of health for COVID-19 cumulative case rates. In our final analysis on 1 August 2020, our model using county-level demographic, health, access, and socioeconomic measures accounted for 45.4% of the variation in cumulative case rates by county. Additionally, we observed that the number of variables included in the model by lasso regression, as well as the model strength, increased as the pandemic progressed. Our findings contribute to a growing body of literature that highlights the need to improve our understanding of the complex interconnectivity between demographics, socioeconomics, and structural inequities as they pertain to infectious diseases. Our study was also consistent with the finding of nationwide community-level disparities in COVID-19 infections and deaths in large US metropolitan areas [30]. Health disparities among racial and ethnic divisions are not unique or specific to COVID-19, having been observed in a variety of infectious and chronic diseases [14, 31, 32]. Instead of proactively protecting those known to be the most vulnerable in society, the gaps in health disparities have continued to widen during this crisis. These findings correspond with others indicating that minority groups are overrepresented in low-wage jobs considered essential, such as transportation and grocery store workers [33]. Additionally, another study found that fewer than one in five Black Americans have the job flexibility to work from home, compared to more than a third of White and Asian American workers [34].
Thus, the racial differences seen in our study and others may be related to a variety of factors, including a varying ability to social distance, differences in access to and quality of care [35], and differences in perceived susceptibility to infectious diseases [36]. Our research supports these explanations, as the inverse association between non-Hispanic Whites and cumulative case rates was the most consistently included variable over time and had one of the strongest coefficients in our final model, alongside a variety of other demographic and socioeconomic indicators. The negative association of the percentage of residents with long, solo commutes first appeared in the analysis on 1 July and was the most influential variable in the final analysis on 1 August. In general, this variable is considered indicative of poor health and chronic diseases, such as obesity, diabetes, hypertension, and cardiopulmonary disease [37][38][39]. We suspect its inclusion represented residents of suburban communities who may be telecommuting during the crisis. Wealthy suburban residents may be more likely to have occupations that allow working from home, as opposed to urban residents, and may be additionally advantaged by low crowding and a higher possibility of social distancing. This association between the ability to work from home and socioeconomic status has been reported previously [40]. Research elsewhere supports such variation in COVID-19 cases by geolocation [41]. Although it was a small contributor to the final model (βz = 0.034), the addition of the percentage of people under 18 years old as a variable in the 1 August model is worth discussing. This variable was not seen previously in our sensitivity analysis. Its addition may be an aberration, but in light of concern over younger individuals becoming infected and spreading the infection [42], this association could also strengthen with time. There is a known increased risk of morbidity and mortality for older adults and a seeming resistance to severe disease outcomes among young adults and children, who may nonetheless be spreading the virus [43]. The increased availability of testing may play a part in the inclusion of this variable, as children, teens, and their young parents may now be tested at a higher rate even if they do not present with severe symptoms. Additionally, as this is an ecologic study, the inclusion of this variable could indicate infection of adults or parents with children under the age of 18 in the household rather than of the children themselves. Further research and monitoring are needed as children return to school. Our study has several important limitations that should be taken into account. First, most of the county-level variables used as independent variables were measured by a variety of organizations across a two- to three-year time span for purposes other than COVID-19. Second, the implementation of new state policies for the mitigation of COVID-19, including stay-at-home orders, social distancing, and mask ordinances, may have impacts not measured through this cross-sectional research. Furthermore, our unit of analysis was the county; aggregation bias should therefore be considered, as relationships observed at the county level may not hold at the individual level. Our methodological rigor in selecting covariates for the final model through lasso regression may also be a limitation, as opposed to selecting the independent variables purely on theoretical grounds.
Lasso regression analysis has been shown to over-select regression coefficients, which is a concern and a drawback of this method. However, it has still been shown to be superior to ordinary least squares techniques in similar situations [44, 45]. Since COVID-19 is caused by a novel coronavirus, we believe that validating traditional epidemiological techniques with machine learning models, such as lasso, can add support to previous findings related to race, with the additional ability to identify variables that contribute small or moderate effects to COVID-19 infection rates. Additional research is needed to further explore the complicated relationships between COVID-19 pathogenesis, environmental factors, demographics, and socioeconomics with regard to the social determinants of health. We hope the use of lasso in this study serves as another methodology for investigating other outcomes of COVID-19 and their relationship to the social determinants of health, such as cause-specific mortality and hospitalization rates. Due to surveillance gaps for this rapidly spreading disease, there have been challenges in collecting and obtaining the individual-level information that could help address the concerns inherent to an ecologic study. Combining individual-level data with neighborhood effects through multilevel modeling could provide a clearer picture of the factors related to COVID-19 diagnoses and mortality. Finally, since this is an ecologic, cross-sectional examination of COVID-19 in the state of Georgia, causal inference should not be drawn from these findings. However, our final model and sensitivity analysis provide a strong starting point for future longitudinal research. The consistency of our findings with the disparities and inequalities observed across the country in morbidity and mortality rates suggests that many structural-level issues are contributing to the spread of COVID-19 [46]. This research examined the community-level impact of factors, from both a health and an economic perspective, on county-level COVID-19 case rates in the state of Georgia. Because health, demographic, and socioeconomic factors overlap in very complex ways, the full scale and intricacy of these interlinkages are difficult to ascertain. However, we believe the strategic use of machine learning techniques, such as lasso, can elucidate some of these complexities. In the absence of consistent data collection on the demographics of positive cases, group-level studies such as ours help to identify influential predictors. Given the knowledge that the social determinants of health have significant effects on the acute and chronic disease burden within a population, these findings support the linkage between fragile health, economic indicators, and demographics as key predictors of infection rates. Until longstanding inequities are eliminated and systemic injustices are addressed, the health and wellbeing of vulnerable and minority populations across Georgia will continue to be disproportionately affected, leaving marginalized communities to shoulder the largest burden of COVID-19.
which were concentrated approximately 100- to 200-fold. Serial 2-fold dilutions of BCV were prepared in 0.05 ml of veronal-buffered saline containing 0.1% bovine serum albumin and 0.001% gelatin and mixed with 0.05 ml of 0.8 and 0.4% suspensions of mouse and pooled adult chicken erythrocytes, respectively, in the same buffer. The mixtures were then incubated for 1 h at either 4 or 37 °C, and the HA titers were determined. The plates incubated at 4 °C were moved to 37 °C for 2 h to measure the inactivation of receptors, reflected by the disaggregation of the BCV-erythrocyte complexes mediated by the receptor-destroying enzyme (RDE) activity [22]. Antigenic comparisons of the BCV strains were done by indirect immunofluorescence (IF), virus neutralization (VN) and HA inhibition (HI) tests. The indirect IF tests were performed as previously described [8]. Virus titration and VN tests were conducted with HRT-18 cells in microplates as previously described [25]. Virus infectivity titers were expressed as median tissue culture infective doses (TCID50)/ml. The VN antibody titers were expressed as the reciprocal of the highest serum dilution that completely inhibited cytopathic effects (CPE). The antigenic relatedness (R) between the strains was calculated using the formula [1, 8]: R (%) = 100 × √(r1 × r2), in which r1 is the heterologous titer (strain 2)/homologous titer (strain 1), and r2 is the heterologous titer (strain 1)/homologous titer (strain 2). The HI test was done using standard techniques with mouse erythrocytes [19] and sera treated with kaolin and mouse erythrocytes. The antibody titers were expressed as the reciprocal of the highest serum dilution producing complete HI. Cytopathic effects were evident in HRT-18 cells inoculated with each of the strains of BCV. The CPE were characterized by enlarged, rounded, and densely granular cells that occurred in clusters at 2 to 3 days postinoculation [2], and no differences in CPE were observed among these strains. Syncytia were also clearly observed in HRT-18 cells at 2 days after inoculation with these strains following staining with fluorescein isothiocyanate-conjugated anti-BCV serum. Infectivity titers of BCV reached 10^7.0 to 10^8.7 TCID50/ml at the 5th to 10th passages in HRT-18 cells. The HA and RDE titers of the purified BCV strains are summarized in Table 1. All strains agglutinated mouse erythrocytes, and no differences were observed in HA titers against mouse erythrocytes at 4 and 37 °C. All strains also agglutinated chicken erythrocytes at 4 °C, but the HA titers varied among the BCV strains. This diversity was reflected in variations in the ratio of the HA titer with mouse erythrocytes to the HA titer with chicken erythrocytes (M/C HA titer ratio) at 4 °C. However, there was no relation between the M/C HA titer ratio and the clinical source (CD or WD) of the strains. At 37 °C, the Mebus and DB2 strains of CD BCV and the DBA and SD strains of WD BCV agglutinated chicken erythrocytes with the same HA titers as at 4 °C. However, the other strains of BCV did not agglutinate chicken erythrocytes at 37 °C and showed RDE activity against chicken erythrocytes. Receptor-destroying enzyme activity with mouse erythrocytes was not detected for any strain of BCV. [Table 1 footnotes: RDE titers are expressed as the reciprocal of the highest dilution of virus causing complete disappearance of HA patterns at 4 °C after 2 h of incubation at 37 °C. CD, calf diarrhea; WD, winter dysentery of adult cattle. M/C, ratio of HA titer with mouse erythrocytes to HA titer with chicken erythrocytes.]
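As a worked example of the relatedness formula, the snippet below computes R from a pair of reciprocal titration results; the titer values are illustrative and chosen to mimic the one-way, 16-fold differences described later in the text.

```python
from math import sqrt

# Antigenic relatedness R between two strains from reciprocal VN titers,
# following the formula reconstructed above; titer values are illustrative.
def relatedness(homol_1, heterol_2_vs_1, homol_2, heterol_1_vs_2):
    r1 = heterol_2_vs_1 / homol_1   # antiserum 1: heterologous / homologous titer
    r2 = heterol_1_vs_2 / homol_2   # antiserum 2: heterologous / homologous titer
    return 100 * sqrt(r1 * r2)      # R, %

# One-way 16-fold reduction with antiserum 1, none with antiserum 2:
print(relatedness(2048, 128, 2048, 2048))   # -> 25.0 (cf. the R% values of 13-25 below)
```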
According to these results, BCV strains were classified into 3 groups. The first group (CD isolates Mebus and DB2; WD isolates DBA and SD) showed low M/C HA titer ratios (≤16), no differences in HA titers against chicken erythrocytes at 4 and 37 °C, and no RDE activity against chicken erythrocytes. The second group (CD isolate 216XF; WD isolates CN, BE and AW) showed low M/C HA titer ratios (≤32), no HA against chicken erythrocytes at 37 °C, and RDE activity with chicken erythrocytes. The third group (CD isolates OHC, SDC and JAZ; WD isolates TS, BM and BW) showed high M/C HA titer ratios (≥256), no HA against chicken erythrocytes at 37 °C, and RDE activity with chicken erythrocytes. These variations in HA and RDE activities were unrelated to the clinical source of the isolates (CD or WD). In indirect IF tests, all of the antisera reacted to each virus strain with high titer (102,400 to 409,600), and each antiserum showed no significant differences in reactivity with the homologous and heterologous strains (not greater than twofold differences). The results of VN tests are shown in Table 2. All of the antisera neutralized the heterologous strains, showing that the strains were closely related antigenically. However, antisera to the Mebus CD strain and the SD and BM WD strains showed VN antibody titers 16-fold or more lower against the DBA, TS, BE and BW WD strains and the 216XF, JAZ, OHC and SDC CD strains than against the homologous strains. These differences were reflected in the R% values: the Mebus, SD and BM strains generated R% values of 13 to 25 against the DBA, TS, BE and 216XF strains. The HI antibody titers are shown in Table 3. (Table 3 notes: HI titers are expressed as the reciprocal of the highest dilution of serum inhibiting HA activity; homologous titers are in bold; titers differing from the homologous titers by 16-fold or greater are underlined; CD, calf diarrhea; WD, winter dysentery of adult cattle.) All of the strains showed cross-reactivity, but differences in antibody titers were observed. The DB2 strain of CD BCV and the SD strain of WD BCV were closely related in the HI tests, and antisera to these strains distinguished most other strains with 16-fold or greater differences in HI antibody titers. Bovine coronavirus causes neonatal CD [13] and is also associated with WD of adult cattle [17]. Based on epidemiological data, these disease syndromes often occur as separate and distinct outbreaks in herds [4, 12, 17]; hence antigenic or biological differences between CD and WD BCV might be expected. Calf diarrhea BCV isolates belong to a single serotype [4, 15, 25], but minor antigenic and biological variations among them have been revealed in limited studies [6, 8, 11, 14, 23]. In this study, we compared the antigenic and biological diversity of a variety of WD and CD BCV isolates. Storz et al. [23] reported that variations in the ratios of HA titers with mouse erythrocytes to those with chicken erythrocytes (the M/C HA titer ratio in this report) were observed among CD BCV strains. The L9 strain of CD BCV at a high cell culture passage level (the 78th passage) showed a low M/C HA titer ratio (8), whereas the wild-type CD BCV strains at low cell culture passage levels (the 3rd to 8th passages) showed high M/C HA titer ratios (128 to 256).
In this study, the high cell culture-passaged Mebus strain of CD BCV (the 77th passage) showed a low M/C HA titer ratio (2), but some low cell culture-passaged CD strains (DB2, 216XF) and WD strains (CN, DBA, SD, BE, AW) of BCV (the 5th to 10th passages) also showed low M/C ratios (4 to 32). The differences in HA titers against chicken erythrocytes at 4 and 37 °C showed good agreement with the RDE titers for chicken erythrocytes. This suggests that comparison of HA titers obtained at 4 and 37 °C may provide an alternative method for evaluating RDE activity. On the basis of HA and RDE patterns, BCV strains were classified into 3 groups. However, each group contained both CD and WD BCV, and no relationship between each group and the clinical (CD or WD) or geographic origin of the strains was observed. All strains of both CD and WD BCV examined in this study were related antigenically. Specifically, each antiserum showed no significant difference between the homologous and heterologous strains in indirect IF antibody titers. However, some antigenic diversity among BCV strains was observed by VN and HI tests. In our previous report [2], hyperimmune antiserum prepared in a gnotobiotic calf to the Mebus strain of CD BCV had an 8- to 32-fold lower VN antibody titer against the DBA strain of WD BCV than against the homologous strain. In this study, guinea pig hyperimmune antiserum to the Mebus strain showed similar VN antibody titer differences between the homologous and DBA strains. In addition, this serum also distinguished three other strains of WD BCV and four strains of CD BCV from the homologous virus by 16- to 64-fold differences. Moreover, hyperimmune antisera to the SD and BM strains of WD BCV also distinguished the same strains that were distinguished by the anti-Mebus serum, from the homologous strains, by 16- to 64-fold differences in antibody titers in VN tests. Although these variations were recognized only in one-way reactions and all strains examined could be classified into a single serotype, the strains showing these variations might be further divided into 2 subtypes. The Mebus, SD and BM strains, which belong to the same potential subtype, could be distinguished from the DBA, TS, BE and 216XF strains, which constitute another possible subtype (R% values of 13 to 25). The BW, JAZ, OHC and SDC strains also appeared to belong to the latter subtype. The DB2, CN and AW strains comprised an intermediate group that cross-reacted with both subtypes. Interestingly, antiserum to the Mebus strain of BCV which had been prepared in guinea pigs in Japan showed only a 2-fold lower VN antibody titer against the 216XF strain than against the homologous strain [25]. In the present study, antiserum to the Mebus strain prepared in the U.S.A. showed 16-fold VN antibody titer differences between the homologous and 216XF strains. The reason for this discrepancy is unknown, but differences in the passage level of the Mebus strain at the time of antiserum preparation might affect virus antigenicity [10]. Alternatively, contamination of cultures with other BCV might have occurred after import and propagation in Japan. The bovine coronavirus strains examined in this study showed minor antigenic and biological variations, but this diversity was unrelated to the geographic origin or the affected animal age groups (WD and CD) from which these strains were recovered.
Based on preliminary data, gnotobiotic and colostrum-deprived calves inoculated orally and nasally with WD BCV shed BCV rectally and nasally and developed diarrhea that was indistinguishable from the disease symptoms in calves inoculated with CD BCV [9]. Also, a cow inoculated via a duodenal cannula with CD BCV developed diarrhea and shed BCV (H. Tsunemitsu et al., 1994, unpublished data). These results suggest that the differences between these disease syndromes (WD and CD) are not related to viral factors, but to host and environmental factors, e.g. the immunological status of the animals, environmental temperatures, and secondary infections or coinfections with other pathogens [3, 12, 17]. Further studies are in progress to compare the antigenicity of BCV strains using monoclonal antibodies and in vivo cross-protection tests.
According to the Johns Hopkins Coronavirus Resource Center 1, as of October 2020 global cases of COVID-19 exceeded 44 million people while the death toll exceeded 1.17 million. Many countries have issued stay-at-home or lockdown orders to control the spread of the virus. The vast majority of schools around the world have been closed since March 2020, and to this date school systems are struggling with how to start the new 2020/2021 academic year. Many school systems have chosen a virtual learning model, while a few have chosen a hybrid system of in-person and virtual learning while enforcing social distancing and mask-wearing mandates. Education and school officials must make hard decisions to balance the health and safety of students, their families, and their teachers against the quality and efficacy of face-to-face education. On the positive front, diagnostic tests for COVID-19 are now readily available. PCR-based tests can diagnose active infections, while serology tests are blood-based tests that can detect COVID-19 antibodies. Test times can range from minutes to hours to even days. In this paper we study an Active Surveillance model for proactively detecting COVID-19 infections among school students. Our Active Surveillance model involves random daily testing of a percentage of students for early detection and quarantine of sick students. We study the impact of Active Surveillance on infection rates and the health of our students, as well as the economic impact of reducing quarantine and hospitalization rates. There have been a few publications on simulating the spread of COVID-19 among populations, the most famous of which is the Washington Post article 2 simulating COVID-19's exponential spread and several suggested ways to flatten the curve using social distancing and mask-wearing practices. We based our simulation model on the Coronavirus Simulation Matlab program written by Joshua Gafford 3, which is a recreation of the Washington Post COVID-19 simulation article listed above. The Matlab code simulates COVID-19 transmission among a human population in a confined space. A simple multibody physics model, elastic collision between two equal-mass particles, is applied to determine people's trajectories within the space. We modified the "Stimulus" Matlab code to represent a typical school environment where students and teachers interact together on a daily basis. Our school population was 500 people, and the simulation spanned a 60-day period to represent a school quarter. Some critical simulation parameters include: the probability of carriers initially infected (fixed at 1%), the probability of disease transmission (99% for normal behavior without any mitigation practices, such as social distancing and mask wearing; 30% or 60% with mitigation practices), the mortality rate (fixed at 0.69% for the young student age group), and the average recovery period for sick students (14 days). We assumed the worst case of an asymptomatic disease pattern, in which sick students still show up to school and can infect others.
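As a rough sketch of the kind of agent-based simulation described above, the code below uses the stated parameters (500 people, 60 days, 1% initial carriers, 14-day recovery, 0.69% mortality) but substitutes simple random mixing for the paper's elastic-collision physics; the contact rate and all names are our assumptions, not the authors' Matlab code.

```python
# Rough sketch of the school simulation described above: 500 agents, 60 days,
# probabilistic transmission on contact. Random mixing stands in for the
# paper's elastic-collision physics; CONTACTS_PER_DAY is our assumption.
import random

POP, DAYS = 500, 60
P_INITIAL_INFECTED = 0.01     # 1% initial carriers
MORTALITY = 0.0069            # 0.69% for the young student age group
RECOVERY_DAYS = 14            # average recovery period
CONTACTS_PER_DAY = 8          # assumed mixing rate, not from the paper

UNAFFECTED, INFECTED, RECOVERED, DECEASED = "unaffected", "infected", "recovered", "deceased"

def simulate(p_transmit=0.99, rng=None):
    rng = rng or random.Random(0)
    state = [INFECTED if rng.random() < P_INITIAL_INFECTED else UNAFFECTED
             for _ in range(POP)]
    days_sick = [0] * POP
    for _ in range(DAYS):
        # Transmission: each currently infected agent meets a few random others.
        for i in [k for k in range(POP) if state[k] == INFECTED]:
            for j in rng.sample(range(POP), CONTACTS_PER_DAY):
                if state[j] == UNAFFECTED and rng.random() < p_transmit:
                    state[j] = INFECTED
        # Progression: recover (or die) after the average sick period.
        for i in range(POP):
            if state[i] == INFECTED:
                days_sick[i] += 1
                if days_sick[i] >= RECOVERY_DAYS:
                    state[i] = DECEASED if rng.random() < MORTALITY else RECOVERED
    return {s: state.count(s) / POP for s in (UNAFFECTED, INFECTED, RECOVERED, DECEASED)}

print(simulate())                  # normal behavior, 99% transmission
print(simulate(p_transmit=0.30))   # strict mitigation practices
```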
The simulation tracks four metrics during the 60-day period: the percentage of unaffected students, who did not catch the virus; the percentage of infected students; the percentage of students recovered from infection; and the percentage of potentially deceased students based on the mortality rate. The three main scenarios for simulation are: 1) normal behavior with no mitigation practices, 2) mitigation practices inside the school, including social distancing and mask wearing, and 3) active surveillance procedures, where a percentage of students are randomly tested on a daily basis to detect and quarantine infected students before they infect others. The combination of Active Surveillance with either normal behavior or mitigation practices is also modeled. The following set of figures demonstrates some of the simulation results under different scenarios. Figure 1 demonstrates the baseline scenario of "Normal Behavior" without any mitigation practices (no mask wearing and no social distancing). The left side shows a simplified simulated school environment where students and teachers move and interact daily. The color-coded dots represent students in different states (green: unaffected, red: infected, blue: recovered from infection, and black: deceased from infection). The graph on the right side demonstrates the progression of the four metrics (Unaffected%, Infected%, Recovered%, and Deceased%) during the 60-day period. For the baseline normal behavior simulation case, the majority of the students (97.6%) will get infected by the end of the 60-day period, as demonstrated by the 2.4% unaffected rate on Day 60. However, by the end of the period, 96.8% of students would recover after a 14-day quarantine on average, while 0.8% of students could unfortunately pass away. Some of the infected students could potentially need hospitalization. We can conclude from this simulation case that it will be extremely difficult to return schools to normal practice if no mitigation behaviors are implemented. [Figure 1: progression of Unaffected%, Infected%, Recovered%, and Deceased%; infection rate is 97.6% (2.4% unaffected rate).] Figure 2 shows the simulation case of applying mitigation practices among students, such as mask wearing and social distancing. In this case we assumed the mitigation practices will decrease the disease transmission rate from 99% to 30%. We assume the mitigation practices will be enforced by school administrations to achieve such a high efficacy in decreasing the transmission rate. In this case the percentage of unaffected students drastically increased from the previous baseline case to 87.6% (only a 12.4% infection rate among the student population). This case demonstrates the effectiveness of the mitigation practices if strictly adhered to; the challenge, however, is how to enforce them among students, especially at younger ages, for example in the elementary grades. To show how relaxing these mitigation practices can affect the outcome, we changed the disease transmission rate from 30% to 60% to simulate partial adherence to face masking and social distancing rules. Figure 3 shows the result of this case: the unaffected rate drastically decreased from 87.6% to 13.6% (an 86.4% infection rate).
To study the impact of Active Surveillance for COVID-19 testing, we designed a model that tests a randomly selected percentage of students daily. For simplicity we assumed 100% test accuracy and immediately available test results, so that infected students are detected before they can interact with their peers. Figure 4 demonstrates an Active Surveillance case of testing only 5% of the student population (25 tests/day) along with applying social distancing and mask-wearing practices, which, as in Figure 3, we assumed to give a 60% disease transmission probability. The infection rate drastically improved from 86.4% (13.6% unaffected rate) in Figure 3 to 12.8% (87.2% unaffected rate) in Figure 4. This result clearly demonstrates the efficacy of the Active Surveillance methodology even with a reasonable test rate (5% of students). To test the sensitivity of the Active Surveillance approach, we lowered the daily test rate to only 1% of students (5 students out of 500). Figure 5 shows the result of this scenario: the infection rate increased to 74.4% (25.6% unaffected rate), indicating that Active Surveillance is ineffective if the test rate is too small. The previous simulations demonstrated the health benefits of applying Active Surveillance measures by reducing the infection rates among students. However, the economic impact of Active Surveillance is also an important factor when it comes to the financial burden of testing. The main question we try to answer in this section is: will the Active Surveillance methodology reduce or add to the financial burden of the COVID-19 pandemic? Incorporating all the financial factors related to COVID-19 is a complex issue. In our study we incorporated three main cost metrics: 1) the cost of the daily Active Surveillance tests, 2) the cost of potential hospitalization of some infected students, and 3) the cost of lost income for one or two parents who may have to take leave from work to care for their sick student. We tried to make reasonable assumptions for the different cost metrics as follows. We assumed an average test cost of $50, a hospitalization rate of 5% among the infected population, an average hospitalization cost of $40K, and two weeks of lost income of $2,430, assuming that the 2019 U.S. median household income was $63K. We ran our simulations while sweeping the Active Surveillance percentage from 1% to 100% with non-uniform intervals. In each run we average 20 cycles with the same settings and calculate the average infection and unaffected rates for students. We also calculate the daily test cost, the daily potential hospitalization/income-loss cost, and the daily total cost of testing, hospitalization, and income loss. Our goal is to find the optimal Active Surveillance test rate that maximizes the health benefits (lower infection rates) while minimizing the average cost associated with COVID-19.
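The sketch below illustrates the daily testing step described above under the same simplifying assumptions (100% test accuracy, immediate results); the function and the 'quarantined' state label are ours, meant to plug into an agent-state loop like the earlier sketch.

```python
# Minimal sketch of the daily Active Surveillance step: test a random sample of
# students; any infected student found is quarantined and stops transmitting.
# Assumes 100% test accuracy and immediate results, as in the text.
import random

def daily_surveillance(state, test_rate, rng):
    """Randomly test test_rate of the population; quarantine detected cases."""
    n_tests = int(test_rate * len(state))
    for i in rng.sample(range(len(state)), n_tests):
        if state[i] == "infected":
            state[i] = "quarantined"  # removed from the mixing population
    return state

# Usage: 5% daily testing of a 500-student population with 5% currently infected.
rng = random.Random(0)
state = ["infected" if rng.random() < 0.05 else "unaffected" for _ in range(500)]
daily_surveillance(state, 0.05, rng)
print(state.count("quarantined"), "students quarantined today")
```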
Figure 6 demonstrates how the unaffected rate (and thus the infection rate) and the average daily cost change as the Active Surveillance rate changes. [Figure 6: Active Surveillance % vs. unaffected student % vs. average daily cost, for the case of mitigation procedures with a 60% disease transmission rate.] As the Active Surveillance test rate increases, the unaffected rate increases (the infection rate decreases) while the daily cost decreases, due to minimizing hospitalization and income-loss costs. Beyond a certain Active Surveillance rate, around 6-10%, the infection rate stabilizes, with no improvement from increased test percentages, while the daily average cost increases monotonically as more tests are conducted daily. It is evident from the graph that the optimal test percentage could be anywhere between 6-10% of students getting tested every day to achieve both the minimal infection rate (≤10%) and the minimal average cost. To evaluate the efficacy of Active Surveillance testing alone, without any mitigation procedures, we ran our simulations with no mitigation actions (no social distancing and no mask wearing), where the infection transmission rate is 99%. Figure 7 shows the results of normal behavior while sweeping the Active Surveillance test rate. The unaffected rate graph and the cost graph follow the same pattern as with the mitigation procedures, but push the optimal testing rate to 8-10%, which achieves the lowest infection rates (≤10%) and the lowest average cost. This case may be useful for elementary school student populations, where enforcing mask-wearing and social-distancing protocols may be challenging. In this work we investigated a simulation model for a school environment to better understand the impacts of social distancing, mask wearing, and Active Surveillance testing on infection rates among students. It is evident that the mitigation measures and Active Surveillance testing are both effective in limiting the spread of COVID-19 infection among students. The most effective practice is to employ both mitigation and Active Surveillance testing together. Under the given assumptions, Active Surveillance with a reasonable test percentage (6-10% with social distancing and mask wearing, or 8-10% without mitigation procedures) can achieve both the health and economic goals of safe schools with lower economic burdens. No compromise is needed between the health of our students and teachers and the economic burden of COVID-19. For future work, we plan to incorporate more realistic assumptions about the speed of test results, different test techniques and pricing, pool-testing techniques (testing multiple samples with one test kit), student grouping (classroom divisions), etc. We will also investigate the sensitivity of the results to varying different parameters in our study, such as the cost of testing, test sensitivity and specificity, etc.
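To illustrate the cost trade-off just described, the sketch below combines the stated dollar figures into an average daily cost and sweeps the test rate; the infection rates in the sweep are hypothetical stand-ins for the averaged simulation output (20 runs per setting in the paper), chosen to roughly mirror Figure 6.

```python
# Sketch of the cost model described above; the dollar figures come from the
# text, the arithmetic and the stand-in infection rates are ours.
POP, DAYS = 500, 60
TEST_COST = 50.0           # per test
HOSP_RATE = 0.05           # share of infected students needing hospital care
HOSP_COST = 40_000.0       # per hospitalization
INCOME_LOSS = 2_430.0      # two weeks at the 2019 US median household income

def avg_daily_cost(test_rate, infection_rate):
    tests = test_rate * POP * DAYS * TEST_COST          # testing over the quarter
    infected = infection_rate * POP
    sickness = infected * (HOSP_RATE * HOSP_COST + INCOME_LOSS)
    return (tests + sickness) / DAYS

# Hypothetical sweep: infection falls steeply, then plateaus near 10%.
for rate, inf in [(0.01, 0.74), (0.05, 0.13), (0.08, 0.10), (0.20, 0.09), (1.00, 0.09)]:
    print(f"test {rate:>4.0%}: infection {inf:.0%}, avg daily cost ${avg_daily_cost(rate, inf):,.0f}")
```

With these stand-in numbers the cost curve is U-shaped: hospitalization and income loss dominate at low test rates, testing dominates at high rates, so the minimum lands near the 6-10% band the paper reports.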
We hope this work helps inform and inspire our education and government leaders to consider Active Surveillance testing as an avenue to reopen our schools safely and efficiently.
In March 2020, a 50-year-old woman presented to our Plastic Surgery Clinic with a 3 × 2.5 cm oral cavity cancer. A CT scan with contrast was performed and confirmed the presence of a lesion involving the floor of the mouth, the symphysis of the mandible and the left neck lymph nodes. It is well known that a timely start of surgical treatment is associated with higher survival rates in patients with head and neck cancer (Graboyes et al., 2019). Based on these premises, the Plastic and ENT surgery units agreed to operate on the patient on April 1. Before the operation, the patient underwent all requested pre-operative screening tests. Testing for COVID-19 was negative. In the OR, during intubation and extubation, all health staff wore personal protective equipment (PPE). We performed a wide tumor excision, marginal mandibulectomy, bilateral functional lymph node dissection and soft tissue reconstruction with an ALT free flap. In the post-operative period the patient spent the first night in a dedicated intensive care area, where she was monitored with the Licox PtO2 system (Arnez, Ramella, Papa, Novati, et al., 2019). Afterwards she was transferred to the Plastic Surgery unit, where she remained isolated for a further 7 days, until her second Covid-19 test was found negative. The post-operative period was uneventful and without any complication. In April 2020, an 80-year-old man was admitted to our ER with a Gustilo IIIA open tibial fracture after a fall from height. The patient had metastatic urothelial cancer and was positive for COVID-19. Our orthoplastic protocol (Arnez, Ramella, Papa, Galici, et al., 2019) consists of a first debridement, temporary bone fixation and coverage within 24 hr of trauma, followed by definitive bone fixation and soft tissue coverage within 7 days of trauma. In this case, the first surgery was technically demanding due to the use of PPE, the impossibility of wearing magnification loupes and difficult communication between the members of the surgical team. For these reasons, and for the patient's comorbidities, we decided on conservative non-microsurgical treatment (debridement, external fixation and direct closure of the wound) to simplify the post-operative management of the Covid-19-positive patient. Microsurgical free flap reconstructions are demanding surgeries that require long OR times and often intensive post-operative care in ICU units. In this period of constrained resources and the necessity of wearing additional protective equipment, microsurgical reconstruction must be performed, but only after careful case-by-case assessment, because even during the Covid-19 pandemic it is imperative for health systems to guarantee patients the best treatment. In borderline cases, when possible, in view of the many additional difficulties, we suggest a more conservative approach. The authors declare no conflict of interest and no financial disclosure.
The current coronavirus disease 2019 (Covid-19) pandemic, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is expanding globally, becoming such a serious public health emergency that drastic measures across all continents, including nationwide lockdowns and border closures, have been necessary to slow down the spread of the disease. Pregnant women and their fetuses represent a high-risk population during infectious disease outbreaks, and they are a challenge in terms of health care [1]. Spain has been one of the countries most affected by the pandemic, with more than 243,000 positive cases and 27,000 deaths at the end of May [2]. Castilla y León has been the third with the highest number of cases (19,104 infections and 1928 deaths on June 11th) among the 17 autonomous communities of Spain. Of the nine provinces that make up the Community of Castilla y León, Salamanca is the second in both the number of infections (4280) and deaths (369, 19% of the whole Community) [3]. Salamanca University Hospital is one of the largest hospitals in the Community, with 2000 deliveries per year on average. From the beginning of the pandemic, testing with nasopharyngeal swabs and a quantitative polymerase chain reaction (qPCR) assay was performed in the Emergency Department on patients with Covid-19 symptoms, and an outpatient program was implemented to trace positive patients' contacts. As evidence was reported of a large number of asymptomatic positive patients who could act as carriers [4, 5], the Obstetrics Department asked for the qPCR test to be performed on all patients admitted both for delivery and to the obstetrics ward, to guarantee admission in a safe environment and to reduce the chance of transmission to other patients and to the healthcare staff. From 23 March, the qPCR test to detect SARS-CoV-2 was carried out in all women admitted for delivery. Since then there have been 366 deliveries up to 11 June. Of these, 25 patients (6.8%) have tested positive for the virus. Twelve of them were detected by qPCR on admission for delivery, all of them asymptomatic. It should be noted that two of these patients were referred from a private centre where serological screening had found IgM-positive antibodies. On admission, both women tested negative by qPCR; however, considering the chance of either early-stage Covid-19 or a false-negative qPCR, they were treated as if they were positive [6]. The qPCR test was repeated a week later, becoming positive in one of them. The reasons for admission in these patients were all obstetric: six were admitted in labour, two presented premature rupture of membranes at term (PROM), two had preterm premature rupture of membranes (PPROM), one post-term pregnancy was admitted for induction and one term pregnancy presented severe pre-eclampsia. Another nine patients were detected through the outpatient contact-tracing program. Six of them were asymptomatic whilst three had mild symptoms. One of the patients in this group, who was positive at 31 weeks gestation and whose qPCR was negative at 33 weeks, was admitted with an intrauterine fetal death and required an emergent caesarean section due to severe pre-eclampsia and disseminated intravascular coagulation. Four patients were tested by the hospital Emergency Department, all of them symptomatic: two had mild symptoms and were discharged under home isolation and contact-tracing recommendations.
The other two presented severe pneumonia and were admitted to the Internal Medicine Ward. One of them developed threatened preterm labor resistant to tocolysis, which ended in preterm delivery at 32 weeks gestation. The other patient suffered severe deterioration and required admission to the Intensive Care Unit for a week (Fig. 1). In our experience, 72% of the positive patients were asymptomatic (n = 18) and 20% presented mild symptoms that allowed outpatient control, without hospital admission (n = 5). Only 8% (n = 2) required admission for severe symptoms, mainly pneumonia. These data are important, as without a prior screening test we could unknowingly be facing a positive patient, thus increasing the risk of transmission and therefore the spread of infection [7]. From the beginning of the crisis, admission protocols in the Obstetrics Department were adapted to avoid the separation of the positive mother from her child and to guarantee, on the one hand, isolation from the rest of the women in the ward, preventing the spread of infection to the negative patients, and on the other hand, the necessary intimacy and comfort for both mother and new-born. The implementation of qPCR testing on a universal basis has been a key factor on this point. None of the new-borns developed Covid-19 symptoms or required admission for this reason. Spain has also been one of the countries with the highest number of infected healthcare staff to date, and one where the shortage of protective equipment has been particularly noticeable during the most serious moments of the crisis. In our department, almost 20% of the medical staff has been infected during the pandemic (5/26). Although it is not possible to confirm, it is very likely that most of these infections occurred in the period prior to the declaration of the state of alert and the routine performance of screening tests, when patients were being attended without adequate protective measures [8]. This is particularly important not only because of the loss of health care capacity for the population, as the effective number of available health care workers is reduced, but also because, if these infected workers are themselves asymptomatic, they can act as carriers of the disease and spread it further [4, 9]. For all the above reasons, universal Covid-19 screening in pregnant women prior to admission for delivery is beneficial not only for the women themselves, who will receive more appropriate care, but also to reduce the risk of exposure of the healthcare staff, to trace the Covid-19 status of their contacts and relatives and to implement measures to slow down the spread of the pandemic. Author contributions CAM conceived the idea and wrote the manuscript. VYA is the principal investigator of the study. LAMV, GFJ, GMV, DMJ and SJM were directly involved in the conception of this study as well as in editing the manuscript. All authors approved the final version of the manuscript. Funding This work has been partially supported by a grant from the Gerencia Regional de Salud de Castilla y León, Valladolid, Spain (GRS2045/A/19).
The analytical category of gender implies a social construction around the sexual anatomical difference. It highlights the interrelation of two relevant dimensions: the human body and the way humans experience their bodily dimension. Bodies and their processes cannot be detached from their historical and social contexts. Through this analytical category we are able to see the multiple dimensions of a crisis such as the current one. At a time when humanity faces unprecedented risks in relation to its surroundings, and events that challenge the health of the human body and the wellbeing of human life across the world, we cannot but emphasize the social construction of our realities. A threat to our individual health is also a threat to our collective wellbeing. Reflection on our social dynamics demands far more than mere health measures. Individual and bodily health is to be conceived in its social dimension. We speak of public health when the life conditions of entire groups are so intertwined that there is no way to think of health measures based on individuals alone. Such measures need to be shaped in accordance with the dynamics of the collective body. In fact, the overarching impact of COVID-19 on every aspect of our lives compels us to look even beyond the single concept of public health. In The Lancet, Richard Horton suggests that COVID-19 is not a pandemic but a syndemic, reflecting the effect of 'health and social interactions that are important for prognosis, treatment, and health policy' (Horton 2020). The interaction has an effect on each of the components: 'Syndemics are characterized by biological and social interactions between conditions and states, interactions that increase a person's susceptibility to harm or worsen their health outcomes'. We have seen worldwide multiple inequalities strongly linked to the comorbidities associated with the harshest impact of SARS-CoV-2 among the groups at greater risk of complications and death. The populations suffering from the most extreme inequalities are also those having the least access to preventive and mitigation measures, so that the negative social interactions go both ways. The result is that extreme inequalities are further exacerbated once the virus hits the most disenfranchised communities. Being confronted with a syndemic means that humankind will not be able to address the real challenges posed by the virus unless and until the structural causes of inequalities are addressed. From such a viewpoint, health measures alone will not be enough. Nor will the short-term economic response aimed at mitigating the consequences of the temporary suspension of productive activity. At least two decades ago, the alternative Latin American tradition of Disaster Risk Management had warned about the structural origins of the socio-natural phenomena leading to disaster risk. The deep roots of such events were to be found mainly in the failure of the predatory and extractive economic system, in the imbalance of power relations and in the lack of medium- and long-term policies to address both drivers. Disasters are the result of multi-causal events, and the materialization of un-managed risks. In the stages of handling and recovering from a disaster, both ex ante and ex post, the only possible approach for looking towards the future is tackling the root causes of risk related to the economic system while promoting a more democratic weaving of the social bond.
Socio-natural disasters, such as pandemics, have been appropriately analyzed under these lenses (Castro and Reyes 2006). Experts from this tradition even refused to refer to 'natural' disasters, preferring to call them 'social' disasters, thereby placing the emphasis on those who suffer the impacts rather than on the threat. The reason for this semantic shift was practical: when a disaster is 'natural', the risk is shifted away from collective responsibility. Recognizing the 'social' dimension, instead, implies that the focus is centered on societal and governmental responsibility for risk prevention and effective action in the face of an emergency. In view of the needed re-construction with long-term objectives to mitigate or eliminate the risk of future recurrence of the same disaster, human rights and gender equality should be prioritized by governments at all moments. Throughout decades, feminist economists of the Global South have advocated for the need of a paradigm shift in the economic model, placing the wellbeing of people and the planet at the centre rather than prioritizing economic and financial flows alongside the exchange of goods. The encompassing feminist economic model embraces the human rights framework, ensures gender equality and environmental integrity while promoting democratic processes at the different levels of governance. However, life has been seriously threatened by this capitalistic and neoliberal system. In this article, life should be understood as we know it: not only human life, but the healthy life of the Planet, the survival of biodiversity and the integrity of ecosystems. No time has ever shown the damage of our anthropocentric view to the extent defining this era. Also, it is crucial to single out the negative pervasive effects produced on lives in the Global South, on the daily existence of those who have most suffered the global neoliberal white supremacist division of labour. The devastating results are just as evident, like a fractal, 1 in racialized communities and in communities whose identities were anthropogenically conceived in a notion of alterity by this system for the very purpose of their exclusion (LGBTI, disability, migrants, and others). COVID-19 is powerfully revealing the way in which the human body is subjected to body politics: in the name of 'saving lives' we have witnessed enormous abuses of social life. Under the cover of their responses to the pandemic, many governments have turned to undemocratic processes that have further skewed the balance of power, while endangering the safety of environmentalists and human rights defenders. 2 In many countries there are new open threats to democracy and civil rights, in the form of attacks against activists promoting social and racial justice, like the violence faced by Black Lives Matter and anti-fascist activists in the USA. Feminist activists calling for sexual and reproductive health and rights, or protesting against the increase of gender-based violence, are the targets of violence. In reaction to the many democratic violations around the world, feminists have demanded that lockdowns and confinement measures be carried out within a democratic framework. 3 As for the lockdowns themselves, alarming trends around the world signal a gross regression in the exercise of women's human rights.
While all of these challenges are dramatically exacerbated in contexts of conflict, body politics are also fully at play when public discourses emphasize the priority of ensuring the safety and health of people, especially when this is done by short-cutting governments' mandate to ensure sexual and reproductive rights, i.e. by slow or lacking responses to the disruption of essential sexual and reproductive health commodities, including menstrual health items or contraceptives. It also occurs when conservative forces push for evidently regressive pathways in global resolutions on COVID-19 negotiations, as was the case of the UN's Omnibus resolution on COVID-19. 4 Meanwhile, gender-based violence in domestic and public settings has increased at an alarming rate, a problem documented across the world. 5 Well before COVID-19, it was known that because of the sexual division of labour women were subsidizing the global economy by undertaking most of the unpaid domestic and care work (existing estimates point to women producing two-thirds of the value that circulates in the world). 6 In the recurrent scenario of austerity measures and reduction of social spending, women become shock absorbers; economists even call this an effect of the 'great elasticity' of women's time in relation to work, meaning that women undertake the tasks externalized by the state, such as caring for sick people, the elderly and others (Pearson and Sweetman 2011). Now, the lockdowns have highlighted that the sexual division of labour remains as present and structural as ever, with women having to face the social expectation that they undertake the domestic and care work for the rest of their family and community members. In this sense, when we say that body politics have been at play during the COVID-19 crisis, we mean that states have naturalized the sexual division of labour, through the provision of different destinies to women and men. 1 Fractals are complex geometric structures with patterns of infinite and irregular iteration in self-similarity. Since 1975, fractal geometry has been a crucial notion for science. Fractals have also been applied to social science in the past decade to reflect both the structural and multiple dimensions of human life, as well as the singularity of every event. https://www.britannica.com/science/fractal Accessed 26 October 2020. For more on the use of fractals in social science and philosophy, see Grössing (1993). 2 For an account of the impact of COVID-19 lockdowns on environmentalists and human rights defenders: https://www.business-humanrights.org/en/big-issues/covid-19-coronavirus-outbreak/covid-19-human-rights-defenders-and-civic-freedoms/; https://www.frontlinedefenders.org/en/campaign/covid-19-attacks-hrds-time-pandemic Accessed 15 October 2020. 3 In particular, feminist activists have called for 'zero tolerance for restrictions and regulations not proportionate and effective in dealing with the pandemic, that shrink human and democratic rights and personal liberties, establish or consolidate authoritarian regimes, and that are enforced militaristically'. Feminist Response, Principles: COVID-19 responses must be based on and strengthen democratic values: https://www.feministcovidresponse.com/principles/ Accessed 15 October 2020.
The widespread use of 'women's empowerment' as a so-called solution reflects precisely the resistance to promote the structural transformation that would be required to ensure women's human rights and eradicate the sexual division of labour. The COVID-19 pandemic has set women's human rights back by decades, precisely because of the implicit expectation that women undertake all the burden of paid and unpaid domestic and care work, while having to endure very precarious and downgraded working conditions in their other formal and informal jobs. Women remain at the bottom of public concern, despite the enormous negative impact of multiple discrimination when it is articulated with conditions related to race, age, geographic condition, and others. 7 Despite the social gains women have made throughout modern history, in their painstaking and incessant transit towards the public sphere, the many COVID-19-related lockdowns around the world have shown that governments' role in ensuring the sexual division of labour is 'efficient' for the circulation of capital and goods. Despite calls to guarantee women's human rights, governments and International Financial Institutions (IFIs) have not only been blind to this structural dimension, but actually keep promoting austerity measures as part of the 'recovery' from the recession caused by COVID-19. It has been widely proven that women are disproportionately impacted, more than men, by austerity measures and fiscal consolidation policies, while these also exacerbate discrimination and inequalities. 8 4 The omnibus resolution, titled Comprehensive and coordinated response to the coronavirus disease (COVID-19) pandemic and adopted on 11 September 2020, faced harsh times before being adopted. Operative paragraph 7, on sexual and reproductive health and rights, underwent a specific vote due to the conservative push against this agenda. https://undocs.org/A/74/L.92 Accessed 17 October 2020. 5 Joint statement by the Special Rapporteur and the EDVAW Platform of women's rights mechanisms on Covid-19 and the increase in violence and discrimination against women, Geneva, 14 July 2020, https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=26083&LangID=E Accessed 15 October 2020. 6 Figures for the value generated by unpaid domestic and care work around the world still depend on costing studies paired with data obtained from time-use surveys and satellite accounts. In general, women are estimated to perform around 70% of all the unpaid domestic and care work globally (yet multidimensional discrimination also plays a great role here, i.e. indigenous and rural women undertaking more hours and tasks than urban women). Studies based on costing in terms of GDP (the assumption being that this type of work would receive a minimum wage) show that women's labour would amount to around one-fifth of the entire GDP of a country. But estimates of direct remuneration for the different activities as paid in the market show that the value generated could amount to more than the money currently circulating in the world. In other words, the estimation includes monetized and non-monetized value. This would mean there is not enough money to pay for the value generated by unpaid domestic and care work. The estimation greatly depends on the methodology. For more on the macro-economic dimension of care: Antonopoulos, Rania, 'The unpaid care work - paid work connection', ILO, 2009, https://www.ilo.org/wcmsp5/groups/public/---dgreports/---integration/documents/publication/wcms_119142.pdf Accessed 26 October 2020.
From the macro-economic point of view regarding gender, it is clear then that the decision to promote austerity measures relies on exploiting women and discriminated groups of the population instead of addressing the structural elements that are really at the root of the multiple crises humanity is facing nowadays. UNCTAD's Trade and Development Report 2020 9 warns of a 'lost decade' if countries adopt austerity, recommending instead 'tackling a series of pre-existing conditions that were threatening the health of the global economy even before the pandemic hit', such as '(…) hyper-inequality, unsustainable levels of debt, weak investment, wage stagnation in the developed world and insufficient formal sector jobs in the developing world'. 10 We will come back to these issues later in this article. Globally, the United Nations Secretary General, António Guterres, said women are also seeing 'a threat to their incomes: about 60 percent of women around the world work in the informal economy, hence at greater risk of falling into poverty'. 11 In parallel, for those women who have not been able to remain in confinement, either because they are at the forefront of the response 12 or because of their situation of poverty or geographic conditions, the lack of access to basic services such as water and sanitation or housing constitutes a structural barrier to preserving their wellbeing. There has been a blatant omission by governments in ensuring their basic needs and rights. The current COVID-19 crisis is also confronting us with new challenges; these may remain invisible to the wider perception, yet they are very real to those whose lives have been touched by the SARS-CoV-2 virus. As Corina Rodríguez noticed, 13 so far this unprecedented event has been a major health crisis with human bodies being hidden from our eyes, as if those battling the disease should be erased from our view. On the one hand, people who have fallen ill with COVID-19 have isolation imposed on them. 14 On the other, if they are well enough (meaning 'not at risk of death'), they should go through the illness without resorting to the public health sector. The Mexican government, for instance, has insisted that only extremely sick people (with troubled breathing, mental confusion, concurrent high fever) should go to the hospital. 7 'Women's empowerment', in this sense, has operated as a very damaging and regressive term to minimize the systemic dimension of inequalities by relying on the individual or community level, rather than on the imbalance of power and the structures at stake playing against women's human rights. The responsibility to guarantee human rights falls on the shoulders of States as duty bearers. Under no circumstances should women be expected to solve their condition of structural inequality on their own. 8 'Impact of economic reforms and austerity measures on women's human rights', Report of the Independent Expert on the effects of foreign debt and other related international financial obligations of States on the full enjoyment of all human rights, particularly economic, social and cultural rights, Juan Pablo Bohoslavsky, A/73/179, 18 July 2018. https://www.undocs.org/A/73/179 Accessed 26 October 2020.
With the exception of people with comorbidities, elderly persons or pregnant women, the regular practice during the pandemic for anyone feeling symptomatic has been to remain at home in isolation, taking medications that may treat the main symptoms, in order to avoid hospital saturation. Due to a fatal mix of factors, COVID-19-related deaths rose fast and exponentially in Mexico. The high level of comorbidities in the Mexican population (linked to extreme poverty but also to the massive consumption of highly processed food and beverages) is at the core of the high morbidity, coupled with the fact that 60% of the population works in the informal sector. As in many other countries, decades of depletion of the country's health sector have weakened state capacity to handle the quick progression of the disease. 15 The prescription of isolation, and the individualization of the impact of the illness without bearing in mind the social bond, also has many implications: in many countries, suffering from COVID-19 is equivalent to being subjected to stigma and discrimination. For those convalescing at home, the range of social resentment can take a variety of forms, from accusations of exposing their neighborhoods or villages to the risk of the virus, to violent attacks expelling them from their communities. 16 Hospitals and clinics are micro-cosmos of their own, but not devoid of the conflicting perceptions of the pandemic in the outside world. The phenomenon of stigma has extended to healthcare providers: a wide literature of country reports has documented discrimination against healthcare providers, adding another layer to the risk they face in combating COVID-19. 17 As for hospitalized patients, they are rigorously isolated from their families. Thousands of people have died alone, often after days or even weeks without real human care. Their loved ones are in solitude also, unable to cope first with the distancing and then with the departure. The early months of the crisis brought about a tragic conversation about the 'value of life', often because underfunded health sectors were at full capacity. Medical triage, supposedly a pragmatic solution to weigh in on decisions in the face of scarce medical infrastructure, ended up reflecting an extremely discriminatory society. That those deemed least likely to survive are people with disability or comorbidities raises compelling questions about our unequal societies and their ageist and essentialist conceptions. Through the prism of human rights, these protocols do not stand up to closer scrutiny. For the months and years ahead, when it comes to public policies, medical triage cannot replace proper planning and budgeting, or differentiated protocols addressing the needs of the diverse groups of the population. In the logic of body politics, the extent of what is to be prioritized and what is not, is extreme. Corina Rodríguez points out the strange phenomenon of a pandemic with an extremely high number of deaths but no dead bodies. States have disposed of them following strict health emergency guidelines, but have thereby also deprived mourners of the variety of processes that our human societies have devised to symbolically cope with death. So far, there has not been a collective and global call from governments to address the symbolic dimension of the extent of loss humanity is facing right now. Rodríguez points out that in history's past pandemics (in the most recent one, the 1918 'Spanish flu', but even more in previous centuries), the dying would collapse on the streets and be piled up due to the lack of proper infrastructure; corpses would then be seen out in the open. Such experiences brought their own dimensions of horror but also conveyed the threatening reality of the pandemic. With tragic exceptions, the current massive disposal of corpses outside the public view can be one of the complex reasons why many people are still in disbelief and denial of the pandemic. 18 From a different angle, the management of anonymous dead bodies, deprived of their proper symbolic recognition, brings about an immense void: a deep wound, not quite traceable in any specific part of our collective body. And yet, the tragedy of the exceptionality stares back at us. 9 Trade and Development Report 2020. From Global Pandemic to Prosperity for All: Avoiding Another Lost Decade, UNCTAD, 2020. https://unctad.org/system/files/official-document/tdr2020_en.pdf Accessed 17 October 2020. 10 COVID-19: UNCTAD warns of 'lost decade' if countries adopt austerity, UNCTAD, 21 September 2020. https://unctad.org/news/covid-19-unctad-warns-lost-decade-if-countries-adopt-austerity Accessed 17 October 2020. 11 COVID-19 worsening gender inequality, more women have lost jobs - UN, Daily Post, 10 April 2020. https://dailypost.ng/2020/04/10/covid-19-worsening-gender-inequality-more-women-have-lost-jobs-un/ Accessed 15 October 2020. 12 There are many articles documenting, by country and region, the way women have been at the forefront of the response. This relates to the way in which the sexual division of labor assigns to women roles pertaining to nursing, cleaning, caring for those most in need in communities, and others. 13 Corina Rodríguez, in the webinar Economías Pandémicas y cuidados: Pensando alternativas transformadoras desde la emergencia, organized by Confluencia Feminista rumbo al Foro Social Mundial de Economías Transformadoras, 21 May 2020. https://www.facebook.com/watch/?v=264847981562340 Accessed 17 October 2020. 14 People sick with COVID-19 are told to remain in isolation, away from others, even in their own home, to avoid contagion. People who possibly were in contact with someone sick with COVID-19 are told to quarantine, away from others, for 14 days. https://www.cdc.gov/coronavirus/2019-ncov/downloads/COVID-19-Quarantine-vs-Isolation.pdf Accessed 26 October 2020. 15 By the time this article was written, Mexico was almost reaching 90,000 deaths. Mexico is an interesting country from the point of view of disaster risk management: due to a specific Civil Protection strategy devised since the 1985 earthquake, Mexico consolidated the deployment of mobile hospitals with the help of the armed forces. Even at the peak of contagion and deaths, Mexican hospitals were not saturated to the extent seen in other countries with the same figures, or even fewer, than Mexico. Many things can be said on the matter from the point of view of 'securitizing' health, and the impacts of a military deployed in a territory affected by a pandemic, including specific problems for women, such as an increase of sexual harassment and unwanted pregnancies (Castro and Reyes 2006).
Ecuador went through weeks of anguishing scenes of corpses lying on the streets in Guayaquil, with funerary services at capacity, and the fear of not having the proper health protocols to dispose of them. 19 New York City faced a scandal for some days over the accumulation of decomposed corpses piled up in trucks, as the city struggled to deliver proper burials. 20 Authorities in Manaus, Brazil, rushed to dig mass graves. 21 A disaster risk management strategy with a gender perspective recognizes the need for a dignified disposal of the bodies, notwithstanding the stressing conditions the emergency brings about (Castro and Reyes 2006). In these unprecedented times, what are the implications of socially distanced public ceremonies, of funerals or symbolic acts of collective mourning being treated as exceptions rather than the rule? Corina Rodríguez explains how feminists need to focus on human death in the same way that we have focused on the wellbeing of life. Governments feel the urge, and the comfort, of communicating numbers: the 'figures' of the infected, of those who have recovered, of the dead. Numbers that by now surpass the million death toll. Yet the wounds include the psychological trauma, the effects of governments' omissions in dealing with collective grief. Mexico, a country with a deep attachment to death rituals and ceremonies honoring its dead and ancestors, is already facing a backlash over the government's decision to cancel the festivities of the Day of the Dead on November 1st and 2nd, and not only because of the economic losses. 22 Several cities and states in the Mexican Republic have cancelled the Day of the Dead festivities, including Mixquic, one of the most traditional villages celebrating these ceremonies in the entire country, with an income loss brought about by the one million missing attendants, both from international and national tourism. 23 The festivity is deeply embedded in traditional beliefs dating back to pre-Hispanic times: the celebration is considered a deep commitment to the ancestors. In cancelling the festivities, the Mexican government is reminiscent of a modern Creon. Yet it will not be enough to stop the ancient memorial of the dead by the people of Mexico, who embody an ethical Antigone paying respects to the spirits of their beloved ones in the death ritual, prioritizing family bonding and duty above the normative logic of the polis. 24 The interests played out within the State, and, even more, among the global players, define what is regulated and what is not, what is of relevance and what is not. Capital versus life. This struggle in the end defines not only the wellbeing but the quality of life of some people above others. The more we highlight the macro dimension of gender, the more we realize that gender is not only a national agenda, but a global one, related to all the macro challenges humanity is facing nowadays. Feminist groups around the world have reacted not only to provide analysis of the current global challenges, but also to provide solutions. One of the first global collectives around COVID-19 was the group around the Feminist Response. 25
It would be difficult to pay homage to the effervescent activity of feminists in their countless analyses, proposals and actions throughout the COVID-19 era, but the collective Feminist Response is trying to map the harvest of diverse feminist groups and is a good entry point to recognize their tireless work around the globe, despite the shrinking space for meaningful participation at all levels. Governmental processes and international negotiations have turned to videocalls, eroding most of the conditions that can ensure transparency, accountability and meaningful participation. Globally, women's and feminist organizations have also seen their funding dangerously reduced, despite the crucial role they undertake in their communities. What has been States' response in the face of citizens' challenges and movements' demands? Many high-sounding declarations have been made at the national level, but hardly any measures have properly addressed the impact of global challenges on women or the macro-economic routes to gender equality. What we have seen instead is a predominant trend to promote short-term solutions rather than systemic ones. In the multilateral field we hear calls for 'Building Back Better', which means reinforcing the same measures that led us to where we are. Some have said COVID-19 marks the end of an era, but the opposite is true: the trends indicate we are deepening the problem and sending millions of people into extreme poverty. In the words of Barbara Adams: 'Yet again people around the world were witnesses to the enormous gap between the well-articulated diagnosis of where we are and what needs to be done not only in the face of COVID-19 but also of pre-existing inequalities, vulnerabilities and multi-dimensional violence'. 26 The diagnosis links to an urgent call from the feminist movement to bring the public sector back to the main scene. At the national level, this means strengthening the policy space to promote much-needed reforms in several fields: from expanding the room for domestic resource mobilization (e.g. through moratoria on debt payments and the promotion of progressive taxation) to ensuring better spending through Gender Responsive Budgeting, devising differentiated policies to eradicate inequality gaps and redistribute wealth while ensuring wellbeing for people and the Planet, as well as permanent support for the population via universal social protection floors (the only measure which can really target the myriad challenges people face in their daily lives). However, at the national level there is little capacity to fully implement these recommendations. Major social budgetary cuts are now taking place in developing countries, in sectors already wounded by decades of austerity measures. Countries barely have the policy space to react in a proper manner. Those willing to do so are over-indebted, or face the threat of the activation of Investor-State Dispute Settlement (ISDS) clauses when their responses to COVID-19 in defense of public goods clash with private interests. Such is the case of Peru, Guatemala, Bolivia and others. 27 Just as evidently, in order to carry out those measures at the national level to the needed extent, a major structural reform of the international financial, trade and economic arenas is required at the global level. The Civil Society Group on Financing for Development and the Women's Working Group on Financing for Development (WWG on FfD) launched an open call for Global Economic Solutions Now! and are promoting a Campaign of Campaigns to raise awareness and rally support for global economic demands that hold at their core a decolonial and feminist vision, promoting the wellbeing of people and the Planet while ensuring human rights and democratic global governance. 28 This proposal is based on the following overarching principles:
• Human rights, gender equality, wellbeing, socio-economic and environmental justice.
• Socio-economic transformation and a just, equitable transition for people and the planet.
• Democratization of global economic governance and inclusive participation at all levels.
The demands are:
(1) A New Global Economic Architecture that works for the People and Planet, under the auspices of the UN: an International Economic Reconstruction and Systemic Reform Summit.
(2) Debt cancellation, SDR issuance and a Sovereign Debt Workout Mechanism at the UN.
(3) Establishment of a UN Tax Convention for redistributive justice, eliminating regressive taxation and illicit financial flows.
(4) Creation of a global technology assessment mechanism at the UN.
(5) A full assessment of the development impacts of the current trade and investment framework.
(6) Assessment of the systemic risks posed by unregulated or inadequately regulated financial-sector instruments and actors.
(7) A review of the development outcomes of PPPs and the 'private finance first' approach.
(8) A review of the ODA framework.
The efforts to build larger alliances between different movements in the social, economic and environmental justice fields are already starting to present common fronts in relation to each demand. The WWG on FfD 29 has been working throughout this time to raise awareness and promote an open space to connect the macro demands of this campaign with the gender dimension, human rights and environmental integrity in an intersecting manner. 30 The feminist movement is now looking for solutions at the intersection of solidarity between and within social movements, public policy, and local and community resistance, as well as challenging the premise of building back better, refusing to go back to a world in which women subsidize, even more, entire economies going into recession. The element that distinguishes the COVID-19 era is that social movements, including the feminist movement, are awakening to the urgent need for collective action and alliances at all levels. There was always an understanding of the need for these alliances, and during major pivotal moments of past decades movements have come together to work jointly. Now feminist analysis is also at the core of the solutions, and feminist action is more needed than ever. The time to act with ambition is now, and the time to shatter structures could not be more within reach.
22 Se inconforman vecinos por el cierre de panteones en Muertos, La Jornada, 14 October 2020. https://www.jornada.com.mx/ultimas/capital/2020/10/14/se-inconforman-vecinos-por-el-cierre-de-panteones-en-muertos-7761.html?fbclid=IwAR2aHCtcamRE_n55QpRC6Y9aWtqRiOubMdrK3ZwNabgGFjZW1WC3qdCwR-A. Accessed 15 October 2020.
23 Mixquic se queda sin festejo de Día de Muertos por primera vez en su historia, UNOTV.com, 6 October 2020. https://www.unotv.com/reportajes-especiales/dia-de-muertos-2020-cancelan-celebracion-en-mixquic-por-covid/ Accessed 15 October 2020.
24 Tending to the psychological needs of the population during and after the response to an emergency has proven to be crucial for subsequent economic and societal recovery (Castro and Reyes 2006).
25 The Feminist Response to COVID-19 includes '(…) organizations and activists, working across global movements centered on human rights, sustainable development, and economic and social justice - we have come together in a moment of collective organizing to outline key principles for a just and resilient recovery from the ongoing global pandemic, as well as to track responses and uplift collective action of feminists around the world'. The group's face to the world is its website, https://www.feministcovidresponse.com/, built around a series of principles for responding to COVID and sharing resources, feminist reactions and analysis, webinars and online video materials produced by feminists around the world, as well as a tracker to evaluate different COVID responses around the world. Accessed 17 October 2020.
26 Adams continues: 'Could it be, she asked, that the UN has been "captured", as the President of Equatorial Guinea lamented: "We cannot accept [either] that after so many years, the Charter of the UN continues to preserve the primacy of the major powers who trample on the legitimate aspirations of the weak so that they can enjoy the advantages of the UN system."' Thalif Deen, 'UN Survives a World Turned Upside Down', IPS News Agency, 16 October 2020. http://www.ipsnews.net/2020/10/un-survives-world-turned-upside/ Accessed 17 October 2020.
27 More ISDS cases launched against Latin American states amid the COVID-19 pandemic, AFTINET, 1 September 2020. http://aftinet.org.au/cms/node/1918 Accessed 17 October 2020.
28 Time for a UN Economic Reconstruction and Systemic Reform Summit. Towards a New Global Economic Architecture that works for the People and Planet, https://csoforffd.org/global-economic-solutions-now/ Accessed 17 October 2020.
29 An alliance of women's organizations and networks which advocates for the advancement of women's human rights and gender equality in the Financing for Development related UN processes.
30 For the gender dimension of these macro-economic demands, see Macro Solutions for Women, the People and the Planet, https://www.equidad.org.mx/Noticias/2020/09/29/macro-solutions-for-women-the-people-and-the-planet-womens-working-group-on-financing-for-development-key-messages-and-inputs/ Accessed 17 October 2020. For an in-depth conversation on these agendas, see the webinar series Macro Solutions for Women, the People and the Planet: https://www.youtube.com/channel/UCPJHwi7LJNwpI66egmXPpAg?view_as=subscriber. Accessed 17 October 2020.
The last few decades have seen continuing outbreaks of human infections involving many pathogenic viruses from practically all major families of viruses. Some of these viruses are known and endemic, while others are novel. Some of the known viruses are emerging again to cause more outbreaks, and in places where outbreaks were not previously known to occur. Many of these viruses cause severe human disease and may affect many different organ systems. The most devastating epidemic in the recent past is of course due to the human immunodeficiency virus, which causes infection of many organ systems including the central nervous system (CNS). The influenza viruses (H5N1, H1N1, etc.) pose a serious health threat to millions of people by causing severe respiratory syndromes [1, 3]. Viral encephalitis due to known arboviruses, such as Japanese encephalitis (JE) virus and tick-borne encephalitis (TBE) virus, continues to cause widespread infections and deaths in endemic countries. In the Indian subcontinent, thousands of JE virus infections still occur despite the availability of vaccines [26]. In the Far East, Russia and eastern Europe, TBE recurs regularly to cause deaths [51]. Because TBE is occurring in increasing numbers and in previously unaffected areas, it is now considered to be emerging. West Nile virus (WN), another known arbovirus usually found in Africa, Europe and Asia, emerged for the first time to cause severe disease in humans and animals in North America in 1999 [9, 44, 48]. In the short span of a decade [48], it has spread across the whole of North America to become the most important emerging epidemic encephalitis. The virus has caused neuroinvasive disease in more than 11,000 people, with a mortality of 9%. The enteroviruses are still responsible for outbreaks of CNS infections despite the worldwide retreat of poliovirus. The most important of these is probably Enterovirus 71, because of the very high mortality in patients who develop encephalomyelitis, although fortunately this complication is rare [40, 81]. Nonetheless, it is emerging in many previously unaffected areas, particularly in Asia, where typically children develop CNS disease against a background of epidemic hand, foot and mouth disease [90]. Another important group of viruses that has recently emerged to cause epidemic encephalitis is the henipaviruses. This group of viruses from the genus Henipavirus (family Paramyxoviridae) comprises Hendra virus (HeV) and Nipah virus (NiV). Like other paramyxoviruses, henipaviruses are enveloped, negative-strand RNA viruses. Because of the novel nature of this new genus, the high mortality and the unique pathogenesis of the disease, this review will focus especially on henipavirus infection, as previous reviews have focused mainly on NiV alone [81, 86]. The pathology and pathogenesis will be highlighted and compared with other viral encephalitides. For other important epidemic viral encephalitides, good reviews already exist [26, 32, 51]. HeV was first isolated after an outbreak in horses and two humans in the town of Hendra, Australia, in 1994 [56, 72]. Following this, there were more outbreaks in horses and humans, and to date a total of six human cases with three fatalities have been reported [35, 60, 64]. All the human cases had close contact with infected horses, which are now thought to be the intermediate hosts for HeV transmission [52, 80]. HeV cases have not been reported outside Australia.
The first NiV outbreak started in northern Malaysia in 1998 [7], involving about 27 patients with 15 fatalities [8]. The outbreak, which started in pig farms, then spread to farms in the south following the transfer of infected pigs [54]. This area became the second and most severely affected epicenter [7], and the virus was named after Kampung Sungai Nipah (Nipah river village), located here. A total of 265 cases of acute NiV encephalitis with 105 fatalities has been reported in Malaysia [62], but if asymptomatic or mildly symptomatic cases are also included, the total number is probably more than 350 cases [86]. Infected pigs exported to Singapore spread the infection to some abattoir workers [7, 63]. After 1999, there were no reports of new NiV outbreaks in Malaysia and Singapore; from 2001 onwards, however, several recurrent outbreaks of NiV in Bangladesh and India [36, 38, 49] have involved more than 120 people. The natural host of henipaviruses has been confirmed to be the fruit bat (Pteropus species), and bat-to-human transmission may be direct or indirect via intermediate hosts [13, 34]. In HeV transmission, the horse is the intermediate host [64, 72]. As virus could be detected in oronasal secretions and urine from infected horses, contact with these secretions appears to be the most likely route of transmission [21, 80]. Although person-to-person HeV transmission has not been reported, involvement of the lung and kidney in acute infection and the presence of virus in nasopharyngeal secretions strongly suggest this possibility [64, 85]. The mode of bat-to-horse transmission remains unclear, but may be due to the ingestion of feed or pasture contaminated by bat-derived foetal tissues or urine [80]. Outside Australia, HeV has not been isolated from bats. In the first known NiV outbreak, in Malaysia and Singapore, the intermediate host was the pig, as it is clear that direct contact with infected pigs or fresh pig products was responsible for viral transmission. In Malaysia, a higher prevalence of infection was found among pig farmers, abattoir workers, pork sellers and army personnel involved in the culling of pigs [2, 10, 62, 65, 69]. Widespread surveillance of pig populations and culling of sick animals stopped the Malaysian epidemic [8], while the banning of pig imports and abattoir closure stopped the outbreak in Singapore [8, 10]. It has been suggested that bat-to-pig transmission could result from pigs ingesting half-eaten, contaminated fruits dropped by bats near farms [13]. Person-to-person transmission was very rare in Malaysia: in a large serological survey of health staff, serum neutralisation tests were all negative [55]. However, one nurse who had previously cared for a NiV-infected patient seroconverted but remained asymptomatic, although her brain MRI showed a few discrete lesions typical of those seen in acute NiV encephalitis [75, 76]. In contrast to the Malaysian outbreak, the Bangladeshi/Indian outbreaks showed a high incidence of person-to-person transmission, either to healthcare workers or to other people who had contact with patients [31, 36, 39]. Person-to-person transmission could be explained by the presence of virus in patients' secretions [14]. Moreover, no animal has so far been positively identified as an intermediate host, although in some outbreaks patients were reported to have had contact with sick animals (pigs, cows and goats) [49]. Date palm sap, a local delicacy in Bangladesh, has been implicated in some cases [50].
Palm sap, collected overnight as it drips into open pots tied onto palm trees, allows foraging fruit bats to feed on and contaminate the sap, thus transmitting virus to people who drink the raw sap. The incubation period appears to range from a few days to 2 weeks [11, 27, 64, 72]. Milder clinical features include fever, influenza-like illness, headache and drowsiness [11, 27, 35, 72]. Severe HeV infection can manifest either as a neurological or a pulmonary syndrome, but since only very few patients have been involved, it is not as well characterised as NiV infection. Neurological signs include confusion, motor deficits and seizures, while the pulmonary syndrome comprises an influenza-like illness, hypoxaemia and diffuse alveolar shadowing on chest X-rays [64, 72]. Severe NiV infection is characterised predominantly by an acute febrile encephalitic syndrome. In a cohort of 90 patients with acute NiV encephalitis, the main presenting features were fever, headache, dizziness, vomiting and reduced level of consciousness [27]. In fact, more than 50% of patients had some degree of reduced consciousness. Clinical signs, such as areflexia, hypotonia, abnormal pupillary responses, tachycardia, hypertension, abnormal doll's eye reflex and segmental myoclonus, suggested involvement of the brainstem and upper cervical cord. Segmental myoclonus was characterised by focal, rhythmic jerking of the diaphragm and of muscles in the limbs, neck and face. Meningism and generalised tonic-clonic convulsions were also observed. A pulmonary syndrome appears to occur in a minority of patients: in the same cohort, only 14% were reported to have an unproductive cough [27]. In another Malaysian hospital series, 24% of patients had abnormal findings on chest X-rays, but none had severe lung disease [11]. In the Singapore series of 11 patients, 3 were clinically thought to have atypical pneumonia with abnormal chest X-rays [63]. Brain MRI scans in acute henipavirus encephalitis show multiple, disseminated, small discrete hyperintense lesions, mainly in the cortex, subcortical and deep white matter [47, 64, 70]. In three Bangladeshi patients with apparent acute NiV encephalitis and available brain MRI findings, only one patient showed the same discrete hyperintense lesions, while the other two showed multiple confluent lesions [66]. Specific anti-henipavirus IgM and IgG antibodies, which can be detected in the serum and CSF of most patients, are critical to diagnosis. More is known about seroconversion after NiV infection than after HeV infection. Overall, antibodies are more likely to be positive in serum than in CSF. In NiV infection, IgM seroconversion is about 65% by day 4 and 100% by day 12, and IgM can persist for at least 3 months. There is 100% IgG seroconversion by day 25 [67], and IgG levels may persist for several years [12]. Using either serology or IHC on autopsy tissues, a positive diagnosis can be made in the majority of NiV cases, and when these diagnostic methods are used in combination, the diagnosis can be confirmed in all cases [87]. Specific neutralising antibodies, IgM or IgG, have been reported in HeV-infected patients [35, 60, 72]. Mortality in HeV infection is about 50%, while mortality in severe NiV infection ranges from about 40% (Malaysia) to 70% (Bangladesh/India) [36, 38, 62]. In many of the Malaysian patients who recovered, there were no apparent serious sequelae [27]. Neuropsychiatric sequelae have been reported [59]. Fatal intracerebral haemorrhage is a rare complication [27].
It is not known if post-infectious encephalomyelitis occurs following acute henipavirus infection. Henipavirus infection may be complicated by relapsing encephalitis following apparent recovery. One case of relapsing HeV encephalitis and more than 20 cases of relapsing NiV encephalitis (probably <10% of survivors) have been reported thus far [12, 60, 74]. Some cases of relapsing NiV encephalitis had only mild symptoms, such as fever and headache, during the acute phase, and hence have also been called 'late-onset' encephalitis. The single case of relapsing HeV encephalitis occurred about 13 months after exposure, while an average of 8 months elapsed before relapsing NiV encephalitis occurred. Clinical and radiological findings suggest that relapsing NiV encephalitis is distinct from acute NiV encephalitis [70, 74]. The brain MRI in relapsing henipavirus encephalitis shows patchy, confluent hyperintense cortical lesions. The macroscopic features of the HeV-infected brain have not been reported, while the features of the NiV-infected brain are non-specific; no discrete lesions could be identified. Our current knowledge of the microscopic pathology of henipavirus infection is based on autopsy tissues: 2 autopsies of HeV infection and more than 30 autopsies of NiV infection [85, 87]. In general, the microscopic features of these two infections appear to be very similar; hence, they will be discussed together. One of the most important targets in acute henipavirus infection is the endothelium and smooth muscle of blood vessels. True vasculitis, characterized by varying degrees of segmental endothelial ulceration, karyorrhexis, intramural necrosis and inflammatory cells, is observed in blood vessels of the brain (Fig. 1a), lung, kidney, heart and many other major organs. Milder subendothelial inflammation (endothelitis) may also be seen (Fig. 1b). In some vessels, particularly in NiV infection, occasional endothelial multinucleated giant cells (Fig. 1d) can be detected (in about 30% of cases). Thrombosis is often associated with vasculitis, and some vessels may be completely obliterated by thrombotic plugs (Fig. 1a). Viral antigens (Fig. 1c), RNA and nucleocapsids can be detected in endothelium, multinucleated giant cells and vascular smooth muscle. In NiV infection, there is a suggestion that vascular susceptibility is highest in the CNS compared with other organs. Typically, small vessels, e.g. capillaries, small arteries and venules, show evidence of vasculitis, but the larger vessels do not. Focal haemorrhages may be observed near vascular lesions. In the CNS, the main pathological findings are vasculitis (with or without thrombosis), parenchymal necrosis and evidence of viral infection in neuroglial cells. Vascular lesions in both grey and white matter are seen throughout the CNS, and these are often associated with discrete necrotic or more subtle vacuolar plaque-like lesions. The necrotic plaque-like lesions are characterized by varying degrees of necrosis, neuropil vacuolation/oedema and mild inflammation (Fig. 1e). In neuronal areas, there may be some neuronal loss as well. In the white matter, well-developed plaques consist of eosinophilic, necrotic material similar to the axonal spheroids seen in diffuse axonal injury (Fig. 1f). Inflammatory cells, when present, comprise neutrophils, macrophages, lymphocytes, plasma cells and reactive microglia.
In the HeV-infected molecular layer of the cerebellum, subtle vacuolar plaques are paler staining and consist of small fine vacuoles associated with an increase in CD68-positive macrophages/microglia, which may be difficult to detect without IHC. In some cases, focal neuronophagia, microglial nodule formation, clusters of foamy macrophages, perivascular cuffing and meningitis can be found. Some neurons show rare perineuronal changes. Viral inclusions, antigens, RNA and nucleocapsids are observed mainly in neurons (soma and processes), although the very rare ependymal cell or astrocyte may be involved [28, 41, 87]. Neuronal viral inclusions can be found in the cytoplasm and nuclei, mostly near vasculitic vessels or necrotic plaques. Cytoplasmic inclusions are usually small, discrete, eosinophilic and sometimes multiple (Fig. 2a, b). Nuclear inclusions are less commonly found and occupy most of the nucleus. Inclusions were reported in 62% of cases in one series. Viral antigen/RNA-positive neurons (Fig. 2c) found at the periphery of necrotic/vacuolar plaques may form concentric or eccentric rings. Some plaque-like groups of positive neurons are not associated with prominent necrosis, vacuolation or oedema (Fig. 2d). Necrotic plaques in the white matter are generally not associated with viral antigens/RNA. In acute NiV encephalitis, the level of viral antigens peaks at about 6-10 days, and antigens are largely cleared after approximately 14 days. In a patient with resolving acute NiV encephalitis who died several months after the infection, the brain showed randomly distributed, discrete slit-like, oval or spherical lesions in the grey and white matter. Most lesions consisted of foamy macrophages, lymphocytes and reactive gliosis (unpublished observations). There was no evidence of vasculitis, but some residual perivascular cuffing remained. In the lung, apart from vasculitis, there is parenchymal inflammation, necrosis, intra-alveolar macrophages/inflammatory cells, type II pneumocyte proliferation, alveolar membranes and haemorrhage. Occasionally, intra-alveolar multinucleated giant cells with nuclear inclusions are noted. In the kidney, rare focal glomerulitis, with or without thrombosis or necrosis, and focal inflammation around necrotic tubules may be observed. The heart and lymph nodes may also show features of infection. Viral antigens are demonstrable in the lung parenchyma, including alveolar type II pneumocytes and intra-alveolar macrophages, and in renal glomeruli and tubules. The pathological features of relapsing henipavirus encephalitis are based on one autopsy case of HeV [85] and two autopsies of relapsing NiV encephalitis [74, 87]. Macroscopically, relapsing NiV encephalitis shows varying degrees of confluent softening and necrosis in the cerebral cortex and subcortical areas such as the thalamus and basal ganglia. The microscopic features of relapsing HeV and NiV encephalitis again appear to be similar, and the pathology is confined to the CNS. Other, non-CNS organs are essentially normal, and vasculitis is absent throughout. In affected neuronal areas (cerebral cortex, basal ganglia, brainstem, etc.), confluent and extensive parenchymal necrosis, oedema and inflammation are seen, with some spillover into the adjacent white matter. Inflammatory cells consist of macrophages, lymphocytes and some plasma cells, with prominent perivascular cuffing. In many areas, severe neuronal loss is replaced by reactive gliosis and prominent vascular proliferation.
Although viral inclusions can be found, the most prominent inclusions are found in relapsing NiV encephalitis. Focal viral antigens/RNA and nucleocapsids are demonstrated mainly in surviving neurons (Fig. 2e, f), in ependyma and possibly in other glial or inflammatory cells as well. Neuronophagia and prominent microglial nodules are rarely observed. Severe meningitis is found in many areas. The vasculitis, endothelial syncytia and thrombosis seen in acute henipavirus encephalitis are absent, and blood vessels are all negative for antigen/RNA. Certain pathological features of henipavirus encephalitis may be distinctive enough to suggest the diagnosis, particularly if this can be confirmed by IHC, ISH, serology, RT-PCR and virus culture. Perhaps the most unique finding in acute henipavirus encephalitis is the multinucleated endothelial cell, found in about 30% of NiV cases. This feature has not been described in other viral encephalitides. Extensive vasculitis, found in all cases, is probably a more useful feature for diagnosing acute henipavirus encephalitis. In viral encephalitis, CNS vasculitis is rarely encountered, with the exception of varicella-zoster and herpes simplex infections, which may be associated with granulomatous angiitis [23, 71], a feature not seen in acute henipavirus infection. In the case of varicella-zoster, usually the larger vessels are implicated, but the type of vascular lesion may be variable [43]. Other, non-viral pathogens, including rickettsiae and Neisseria [73], may be associated with vasculitis. In rickettsial encephalitis, vasculitis and necrosis are more subtle and less prominent [79]. Other CNS changes in acute henipavirus encephalitis, such as perivascular cuffing, parenchymal inflammation and neuronophagia, are rather non-specific features and can be found in other viral encephalitides [20]. Vasculitis, a key event in the pathogenesis of acute henipavirus infection, follows from vascular endothelial and smooth muscle cell involvement, resulting in thrombosis, vascular occlusion, ischaemia, microinfarction and probably thromboembolism as well. These vascular lesions contribute to the formation of the necrotic/vacuolar plaques that seem to correlate with the multiple discrete lesions seen in brain MRI studies [47, 70]. However, neuronal infection also contributes to plaque formation, especially in the cerebral grey matter and brainstem. This dual pathogenetic mechanism in the CNS and other organs appears to be unique to henipaviruses. Necrotic/vacuolar plaques in the white matter are caused mainly by ischaemia/microinfarction, since glial cells are far less susceptible to infection. Vasculitic vessels probably cause a breach in the blood-brain barrier, facilitating virus escape into the parenchyma to infect neurons. Inter-neuronal spread further into the periphery of the plaque may contribute to dissemination. In subacute sclerosing panencephalitis (SSPE), it is believed that endothelial infection facilitates measles virus entry into the brain, although vasculitis is absent [15]. The neuron is often the main, if not the only, target cell of most neurotropic viruses, including JE virus [16], TBE virus [24], West Nile virus [30], rabies virus [42], measles virus, herpesviruses and enterovirus 71 [83]. Neuronal infection most likely leads to viral cytolysis and cell damage in various critical parts of the CNS, resulting in an encephalitic syndrome.
Hence, significant pathological changes such as viral inclusions, and virus localisation/detection by various means, are expected to be found in relation to the neuron. Since viral inclusions can be found in many other viral encephalitides, they are not that helpful for the diagnosis of henipavirus encephalitis. Compared with endothelium and neurons, glial and epithelial cells are rarely involved. This may be related to the density of ephrin-B2 and ephrin-B3, recently found to be the receptors for henipaviruses, although this has not been studied [4, 57, 58]. A recent paper suggests that the blood vessel may also be a target in JE, but this has yet to be confirmed [25]. Macrophages are the main infected cells in subacute HIV encephalitis [22]. IHC is very useful for detecting viral antigens to confirm the diagnosis of henipavirus infection, and indeed of numerous other viral encephalitides as well. Both polyclonal and monoclonal antibodies to NiV and HeV have been produced, and many of them are cross-reactive and sensitive enough to detect both infections [53, 77, 87, 89]. Unfortunately, most antibodies suitable for IHC are proprietary and not commercially available. Needless to say, a good knowledge of target cells is needed to assess the IHC assay; in the case of the henipaviruses, viral antigen localisation in neurons and blood vessels is useful for diagnosis. If available, ISH, which detects viral RNA, is also a useful adjunct to the tissue diagnosis of henipavirus infection [84]. One of the most interesting complications of acute henipavirus infection is relapsing encephalitis. Fortunately, it is relatively rare and not uniformly fatal. The presence of viral inclusions, nucleocapsids, antigens and RNA confirms relapsing henipavirus encephalitis as a recurrent infection rather than a post-infectious encephalitis [74]. It is assumed that if the recurrent viruses came from extra-CNS sites, then viraemia (and vasculitis) would have to occur to enable virus to enter the CNS, as in acute henipavirus encephalitis. Hence, the absence of vasculitis and of extra-CNS organ involvement indirectly suggests reactivation of latent viral foci within the CNS, introduced during the acute infection. Vasculitis-induced thrombosis, ischaemia and microinfarction do not appear to play a role, in contrast to acute henipavirus encephalitis. The risk factors for relapsing henipavirus encephalitis are unknown. Clinically, relapsing henipavirus encephalitis does share some similarities with SSPE. However, the latter typically sets in several years after the acute infection and may be more often fatal. Nonetheless, like relapsing henipavirus encephalitis, SSPE is not invariably fatal, and recurrences have been reported [12, 17, 74]. The virus genomic mutations reported in SSPE are one possible mechanism by which henipaviruses could remain latent and escape the immune response [6]. So far, no viral mutations have been found in relapsing henipavirus encephalitis [82]. Measles virus is well known to cause immune suppression [29], and, being paramyxoviruses themselves, henipaviruses may similarly cause immune suppression that might have an impact on the development of relapsing encephalitis. Further investigations are much needed to unravel the pathogenesis of relapsing henipavirus infection. It is apparent that NiV and HeV, both from the same Henipavirus genus, share many common clinicopathological features, suggesting that the pathogenesis of the respective human diseases is essentially the same.
The outbreaks of henipavirus infections in Asia and Australia are prime examples of the emerging zoonotic infectious diseases that continue to occur worldwide, many of them associated with severe epidemic encephalitis. As far as newly emerging or novel viruses are concerned, one of the most important natural hosts is the bat. As many as 40 viruses have been isolated from bat species in as many years, though not all have been shown to be pathogenic to humans. These include Ebola virus, Australian bat lyssavirus, SARS coronavirus, Menangle virus, Tioman virus and the henipaviruses [5, 18, 88]. Because of the worldwide range of bats and their ability to fly over large areas of human habitat, they are very effective vehicles for virus dissemination under the right conditions. In the case of the henipaviruses, the range of pteropid bats includes Southeast Asia, China, Japan, Oceania, the Indian subcontinent, Australia and Africa, and there is evidence of bat infection in many of these countries [13, 19, 33, 34, 37, 39, 45, 46, 61, 68, 78]. Hence, future henipavirus outbreaks can be expected in these regions. Needless to say, the role of pathologists in understanding the pathology and pathogenesis of emerging viral encephalitides is critical. The relative lack of interest in these infections among pathologists, especially in more developed countries, is perhaps not surprising, as they are often regarded as tropical or 'third world' diseases. However, with increasing air travel and the ability of viruses to emerge in previously unaffected areas, e.g. West Nile virus in North America, it is quite clear that greater efforts should be made to study these diseases and to improve diagnostic capabilities. Further understanding of the pathology and pathogenesis of emerging epidemic viral encephalitides should continue to contribute significantly to the development of therapeutic strategies and vaccines.
Letter to the Editor
A joint infection control system is needed in mental health institutions during outbreaks of major respiratory infectious diseases
Faced with the novel coronavirus disease (COVID-19) pandemic, vulnerable populations including the aged, children, pregnant women, and psychiatric patients have attracted widespread concern. 1-3 To protect psychiatric patients, mental health institutions in China have established measures to prevent and control nosocomial infections. However, the capacity of any single institution to combat such a pandemic is limited. Effective prevention and treatment of COVID-19 for psychiatric patients demand cooperation between multiple institutions within a region. Here, we take as an example the cooperative practice of COVID-19 infection control in the regional mental health union in Chengdu, China, to address the initiation of a joint infection control system across mental health institutions during outbreaks of major respiratory infectious diseases. During the COVID-19 outbreak, effective protection for psychiatric patients in Chengdu has been realized through cooperation between member institutions of the Chengdu mental health union. Firstly, a system of COVID-19 prevention and control measures was established by Chengdu Mental Health Center (CMHC), an A-category psychiatric hospital in China with an annual outpatient volume of more than 350,000 and an annual inpatient volume of more than 10,000. Based on this strategic system, CMHC has protected its patients and staff from nosocomial infection and has meanwhile provided instructions for member units, including 15 primary-level mental health institutions, 3 comprehensive hospitals with psychiatric inpatient wards, and 395 community health service centers/township hospitals, leading to effective infection control in the union, with no suspected or confirmed cases of COVID-19 infection due to hospital transmission. In addition, as the hospital designated for suspected and mild cases among COVID-19-infected psychiatric patients in Chengdu, CMHC has cooperated with member units and with the comprehensive hospitals designated for COVID-19 treatment, covering procedures for consultation, referral, and joint treatment to achieve seamless connectivity and optimal, rapid control of both COVID-19 and psychiatric symptoms. Furthermore, under the guidance of CMHC, primary-level mental health institutions have advised and supported community health service centers/township hospitals on the scattered management of home-based psychiatric patients to prevent clustered infection and social instability. While ensuring effective prevention and control of COVID-19 transmission, CMHC and the primary-level mental health institutions have provided onsite and online psychological intervention services for pre-classified hospitals and populations in different districts of their community to develop a facilitative environment in the fight against the pandemic. The cooperative practice of the Chengdu mental health union during the COVID-19 outbreak has important implications for the regularization of joint infection control for major respiratory infectious diseases across mental health institutions. In a city or an equivalent administrative region, there should be a joint infection control network of three levels of institutions (Fig. 1). The top level is a large-scale psychiatric hospital, such as CMHC, integrating medical treatment, education, research, and prevention.
The second level consists of small- and medium-sized psychiatric hospitals or comprehensive hospitals with psychiatric inpatient wards. The third level includes community or township health service centers. The first level plays the central role in the structure, with its strong specialty and a certain degree of comprehensiveness. It should be a hospital designated for suspected and confirmed cases among infected psychiatric patients in the region. Therefore, an independent department of infectious diseases is necessary. The building layout and facilities of this department should meet national requirements, e.g. Requirements of Environmental Control for Hospital Negative Pressure Isolation Ward (GB/T 35428-2017) 4 and Technique Standard for Isolation in Hospital (WS/T 311-2009). 5 Medical staff in the department are mainly specialists in epidemiology and psychiatry, supported by a multidisciplinary team covering critical care medicine, clinical pharmacy, nutrition, and so on. In non-epidemic periods, the department admits patients with mental disorders and hospital- or community-acquired infections. During outbreaks of major respiratory infectious diseases, it immediately functions as a specialized area for suspected or infected psychiatric patients. The establishment of such a department in the psychiatric hospital promotes 'one-stop' service for psychiatric infections. The second level is the intermediate hub connecting the top and bottom levels. In non-epidemic periods, institutions at this level admit or transfer patients mainly according to their psychiatric phases. During epidemic outbreaks, they screen psychiatric patients for suspected cases, keeping patients in hospital for observation or transferring them to designated treatment hospitals. These institutions also support and advise the third level on epidemic prevention under the guidance of the top level. Community or township health service centers, the basic level in the network, are responsible for supervising the rehabilitation and home management of non-acute psychiatric patients. During epidemic outbreaks, they take care of severe psychiatric patients at home and provide medicine delivery services and online treatment for those living in closed management areas. They conduct preliminary screening of psychiatric patients with a common fever and of those with a fever caused by major respiratory infectious diseases, transferring patients with a common fever to the second level and suspected cases of the infectious diseases to the first level. To sum up, infection control practices during the COVID-19 pandemic provide mental health institutions with new insight into the effective protection of vulnerable patients. In the future, professional experience can be combined with data modeling and machine learning to build a joint infection control system with high effectiveness and responsiveness. This study was supported by the National Natural Science Foundation of China, China (grant numbers 61806042, 31600880) and the Special Research Project for the novel coronavirus pneumonia funded by the Chengdu Science and Technology Bureau, China (grant number 2020-YF05-00171-SN).
Neonatal diarrhoea is the most significant cause of morbidity and mortality in dairy calves less than 6 weeks of age. 1 Key variables that affect host immunity, pathogen exposure and, subsequently, the risk of disease include environmental conditions, herd management and nutrition. Disease reflects the culmination of host-pathogen interactions. 2 Generic management strategies are recommended to reduce the risk of neonatal calf diarrhoea, and pathogen-specific interventions such as vaccination or medication may be recommended when a causal relationship is established for specific pathogens. Establishing causality is confounded by pathogen shedding in apparently healthy calves 2 and by the qualitative, but not quantitative, nature of most diagnostic tests used to identify the presence of enteric pathogens in calves. Rotaviruses and coronaviruses have been identified as the most important viral pathogens involved in the neonatal calf diarrhoea complex. 2,3 Diagnostic techniques that may be used to detect rotaviruses and coronaviruses in faecal samples include virus isolation and electron microscopy, as well as assays to detect viral antigens (latex agglutination and enzyme-linked immunosorbent assay (ELISA)) and viral nucleic acid (such as polymerase chain reaction (PCR)-based assays). 1,4-14 A number of diagnostic tests are used to detect rotaviruses and coronaviruses in calf faecal samples in animal health diagnostic laboratories around Australia. Since diagnostic laboratories moved to full cost-recovery for diseases that are not notifiable, the cost to the producer of diagnostic investigations has increased, leading to a reduction in the use of laboratory assays to support field disease investigations. The development and availability of lateral flow immunochromatography (LAT) dipsticks provide an alternative, affordable, rapid calf-side pathogen detection test for the assessment of faecal samples in the field. Interpretation of ELISA and LAT test results is confounded by limited sensitivity and specificity data. Establishing the sensitivity and specificity of these tests is problematic because it is largely influenced by the detection limits of the test, the disease prevalence in the sampled population and the number of viral particles present in the samples tested, which is dependent on the timing of sampling during the course of disease and on the infective dose. The development of real-time reverse transcription PCR (qRT-PCR) assays has improved the diagnostic capabilities of large laboratories for the detection of RNA viruses. These assays are sensitive and quantitative diagnostic tests that allow high sample throughput and screening for multiple pathogens. Further, they require less labour, reduce the likelihood of laboratory contamination and are less expensive than conventional gel-based PCR assays. The objective of this study was to evaluate qRT-PCR assays for the detection of bovine rotaviruses and coronaviruses and to investigate the performance of a commercially available ELISA and a LAT assay used in Australia for the detection of rotaviruses and coronaviruses in faecal samples from sick calves. Faecal samples were collected from outbreaks of diarrhoea in dairy and dairy-beef calves under 6 weeks of age. Herds with a minimum of 100 milking cows or rearing a minimum of 15 calves per batch were included in the study.
An outbreak of diarrhoea was defined as a minimum of 5% morbidity, with calves exhibiting signs of systemic disease (such as poor appetite, dehydration, decreased mentation and reduced suckle reflex) and pasty to watery faeces. Twelve veterinary practices from the six states of Australia with a large number of dairy herds were instructed on sample selection, sampling technique, storage and transport protocols. Practitioners were advised to collect 6-10 samples from each farm. Approximately 25 mL of faecal material was collected from the rectum of calves by direct digital stimulation, using a new latex glove for each calf. Samples were placed in a sterile container and kept refrigerated until shipping. Samples were transported on ice from the veterinary clinics to the Livestock Veterinary Teaching and Research Unit, Camden, using an overnight courier service. Faecal samples were refrigerated on arrival and divided into 2 mL aliquots. One aliquot was stored at 4°C until testing with the commercial ELISA and LAT test kits. For the qRT-PCR assays, 0.1 g of undiluted faeces was mixed with 0.9 mL phosphate-buffered saline (PBS) and stored at 4°C until processed at the Elizabeth Macarthur Agricultural Institute. The remaining 2 mL aliquots of faeces were stored at -70°C for further testing if required. After low-speed clarification (1500 g, 4°C for 10 min) of the 10% suspension of faeces in PBS, 50 µL of the supernatant was used for RNA extraction using a magnetic bead-based system (MagMax 96 Viral RNA, AM1836; Ambion, Austin, TX, USA) in accordance with the manufacturer's instructions. The magnetic beads were handled and washed and the nucleic acid eluted using a magnetic particle handling system (Kingfisher 96, Thermo, Finland). The nucleic acid was eluted in a final volume of 50 µL and stored frozen at -20°C until tested. Prior to testing by the rotavirus qRT-PCR, the RNA was denatured by heating at 95°C for 5 min. Coronavirus. The genome of bovine coronavirus was detected by a qRT-PCR assay that uses a fluorogenic minor groove binding (MGB) probe. Sequence data (GenBank reference FJ938066) from an Australian strain of coronavirus obtained from a neonatal calf were used to design primers and a probe using Primer Express software version 3 (Applied Biosystems, Foster City, CA, USA). The primers and probe target a segment of open reading frame 1ab (ORF1ab) encoding the polyprotein. The assay used 20 µL of a commercial qRT-PCR mastermix (AgPath-ID One-Step RT-PCR kit, AM1005; Ambion), to which 5 µL of extracted RNA was added. The assay was run on an ABI 7500 Fast thermocycler (Applied Biosystems) for 45 cycles under the cycling conditions recommended by the mastermix manufacturer (reverse transcription at 45°C for 10 min; reverse transcription inactivation/initial denaturation at 95°C for 10 min; amplification for 45 cycles at 95°C for 15 s and 60°C for 45 s). Each assay plate included two negative controls (one negative sample and a no-template control) and two positive controls. Results were analysed using a fixed manual threshold (0.05) and expressed as cycle-threshold (Ct) values. Ct values >40.00 were considered negative. The positive controls, derived from a dilution of known positive samples, gave Ct values of approximately 29.00 and 32.00. During our validation studies, a similar assay was published. 15
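The fixed-threshold calling rule and plate controls described above can be summarised in a few lines of code. The following Python sketch is purely illustrative and is not the laboratory's software: the Ct cut-off of 40.00 and the approximate positive-control values come from the text, while the control tolerance of two cycles and the function names are assumptions made for the example.

```python
# Minimal sketch of the Ct-based calling rule described above: wells
# with Ct <= 40.00 are called positive, and Ct > 40.00 (or no
# amplification) negative. The positive-control expectations (~29 and
# ~32) come from the text; the +/-2-cycle tolerance and all names are
# assumptions for illustration, not published acceptance criteria.

CT_CUTOFF = 40.00

def call_sample(ct):
    """Classify one well from its cycle-threshold (Ct) value."""
    if ct is None or ct > CT_CUTOFF:   # no amplification, or too late
        return "negative"
    return "positive"

def run_is_valid(pos_ctrl_cts, neg_ctrl_cts, tolerance=2.0):
    """Accept a plate only if its controls behave as expected."""
    expected = (29.00, 32.00)          # approximate values from the text
    pos_ok = all(abs(ct - exp) <= tolerance
                 for ct, exp in zip(pos_ctrl_cts, expected))
    neg_ok = all(ct is None or ct > CT_CUTOFF for ct in neg_ctrl_cts)
    return pos_ok and neg_ok

if run_is_valid([29.3, 31.8], [None, None]):
    print(call_sample(33.5))   # positive
    print(call_sample(41.2))   # negative
```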
The two assays were compared on a collection of 258 of the samples included in this study (data not shown) and were shown to have identical diagnostic sensitivity and specificity, although the analytical sensitivity of the published assay was sometimes slightly lower. The published assay had an analytical sensitivity of approximately 20 RNA copies/mL and a linear range from 10^1 to 10^9 copies. Our assay showed similar linearity and an analytical sensitivity of approximately 5-10 RNA copies/mL. On the basis of this comparison, our assay was selected as the preferred method for the current study. Rotavirus. RNA samples were denatured by heating at 95°C for 5 min and tested for group A and group C rotavirus genomes using a modification of the assays described by Logan et al. 16 Our assays used the same volumes, mastermix, cycling conditions and thermocycler as described for the coronavirus qRT-PCR, but used the primers and probes described by Logan et al. 16 Two negative and two positive controls (derived from known rotavirus-positive samples) were included on each assay plate. Results were analysed and expressed in the same manner as the coronavirus results. A total of 586 faecal samples were tested using a commercial ELISA kit for rotavirus and coronavirus (Pourquier® ELISA Calves Diarrhoea; Institut Pourquier®, Montpellier, France) according to the manufacturer's instructions. Briefly, 50 µL of dilution buffer and then 50 µL of undiluted faeces were plated in triplicate into the wells of a microplate coated with the appropriate antibody. The plate was held at room temperature (approximately 25°C) for 30 min and then washed manually using the wash solution provided. A unique conjugate (one for each of the three pathogens) was then added to the relevant wells for each sample and the plate was held at room temperature for 30 min. Following a final wash, tetramethylbenzidine (TMB) substrate was added to each well and the plate was incubated at room temperature for a further 10 min. A stop solution (0.5 mol/L H2SO4) was added and the optical densities were measured at 450 nm using an ELISA plate reader (Labsystems Multiscan Biochromatic; Labsystems, Basingstoke, UK). The optical density data were transformed, according to the manufacturer's recommendations, to calculate sample-to-positive (S/P) ratios. Samples with an S/P ratio >7% were deemed positive in accordance with the manufacturer's recommendations. Faecal samples were tested for coronavirus (n = 132) and rotavirus (n = 122) using LAT dipsticks (Bio-X® Diagnostics; Jemelle, Belgium) according to the manufacturer's instructions. Briefly, a small sample of faeces was homogenised in a buffer solution and the dipstick was placed into the suspension. A sample was regarded as positive when both the control and positive indicator lines turned red. A sample was regarded as negative when only the control indicator line turned red. A test was regarded as null (indicative of a faulty dipstick) when the control indicator line failed to turn red, and the sample was then retested using another dipstick. Coronavirus and group A rotavirus infection were detected in the faecal samples by all three detection methods (Table 1). Group C rotaviruses were not detected in any of the faecal samples by qRT-PCR, so all references to rotavirus qRT-PCR results hereafter relate only to group A rotavirus.
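Because the kit's exact transformation is not reproduced above, the sketch below assumes the common sample-to-positive calculation, S/P = 100 x (OD_sample - OD_negative control) / (OD_positive control - OD_negative control), together with the stated 7% cut-off. Treat the formula, function names and numbers as illustrative assumptions rather than the manufacturer's method.

```python
# Hedged sketch of the S/P transformation and the 7% cut-off described
# above. The kit's exact formula is not given in the text; the
# (sample - negative) / (positive - negative) form used here is a
# common convention and is an assumption, as are the numbers.

def sp_ratio(od_sample, od_neg_ctrl, od_pos_ctrl):
    """Sample-to-positive (S/P) ratio, expressed as a percentage."""
    return 100.0 * (od_sample - od_neg_ctrl) / (od_pos_ctrl - od_neg_ctrl)

def elisa_call(od_sample, od_neg_ctrl, od_pos_ctrl, cutoff=7.0):
    """Positive if the S/P ratio exceeds the manufacturer's 7% cut-off."""
    sp = sp_ratio(od_sample, od_neg_ctrl, od_pos_ctrl)
    return "positive" if sp > cutoff else "negative"

# Triplicate wells would typically be averaged before transformation:
ods = [0.21, 0.24, 0.22]
mean_od = sum(ods) / len(ods)
print(round(sp_ratio(mean_od, od_neg_ctrl=0.05, od_pos_ctrl=1.10), 1))  # 16.5
print(elisa_call(mean_od, 0.05, 1.10))  # positive
```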
There was an inverse correlation between the S/P ratio and Ct values for the detection of coronaviruses, suggesting agreement between the different assays, although the correlation was weak (r = -0.07). Viral RNA was detected in 18.8% of samples that were negative in the ELISA, and a proportion of these samples had low Ct values, suggesting a high viral load (Figure 1). There was also an inverse correlation between the S/P ratio and Ct values for the rotavirus assays, suggesting agreement between the different assays (r = -0.40). Rotavirus was detected by qRT-PCR in 73.7% of samples that tested negative using the ELISA (S/P ratio <7%) (Figure 2). The sensitivity, specificity and positive and negative predictive values of the coronavirus and rotavirus ELISAs compared with the qRT-PCR assays are reported in Table 2. There was poor agreement between the coronavirus LAT and qRT-PCR assays (Table 2). Limited agreement was also seen between the rotavirus qRT-PCR and LAT assays, but a trend was seen between Ct values and LAT results (Table 2). However, the qRT-PCR-negative samples included 8/13 (61.54%) samples that were positive by LAT (Figure 4). The LAT and ELISA methods both use antibodies to detect viral antigen; however, poor agreement was observed between the two assays for coronavirus detection (Figure 5, Table 2). The proportion of samples positive for rotavirus by LAT was high (37/38, 97.4%) for faecal samples with an S/P ratio >50, but low (3/18, 16.7%) for ELISA S/P ratios between 7 and 50 (Figure 6). Overall agreement between the two assays was reasonable (Table 2). Establishing a causal relationship between enteric pathogens and outbreaks of diarrhoea in calves is often difficult, because of the propensity for disease to be associated with multiple pathogens and because of the qualitative nature of the diagnostic tests available. The number of organisms shed during enteric infections varies over the course of the disease. A quantitative assay is desirable because it provides an indication of the number of organisms shed and thus a context for interpreting the significance of the finding. During the acute stage of rotaviral infection, viral shedding in faeces can reach 10^8-10^12 virions/mL of faeces. 17-19 The pattern of shedding (i.e. peak viral load and duration of shedding) is partially determined by the colostral status of the animal. 20 When calves are infected with rotavirus or coronavirus, the number of organisms shed in the faeces increases over the first couple of days, reaching a peak between days 1 and 7 post-inoculation. Parreno et al. found a mean duration of rotavirus shedding of 6-10 days, but results were quite variable, with some animals becoming chronic shedders with virus present up to 3 weeks post-inoculation. 20 Experimental studies that examined faecal shedding of coronavirus indicate that coronavirus antigen can be detected throughout the period of diarrhoea. 21 As with many of the enteropathogens, rotavirus and coronavirus can be identified in the faeces of both healthy and diseased animals, 2 so a simple dichotomous diagnostic test is insufficient to establish causality. The optimal method of establishing causality is to identify the virus in faeces and to examine affected tissues for classical histopathological changes or the presence of the organism at necropsy.
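All of the Table 2 measures derive from cross-classifying each assay against the qRT-PCR reference in a 2x2 table. As a worked illustration, the Python sketch below computes sensitivity, specificity and the predictive values from such a table; the counts used are invented for the example and are not the study's data.

```python
# Worked example of the Table 2 calculations: sensitivity, specificity
# and predictive values from a 2x2 cross-classification against the
# qRT-PCR reference. The counts below are invented for illustration
# and are not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard measures of test performance against a reference assay."""
    return {
        "sensitivity": tp / (tp + fn),  # reference positives detected
        "specificity": tn / (tn + fp),  # reference negatives detected
        "ppv": tp / (tp + fp),          # P(reference positive | test positive)
        "npv": tn / (tn + fn),          # P(reference negative | test negative)
    }

# Hypothetical counts for an antigen test scored against qRT-PCR:
for name, value in diagnostic_metrics(tp=120, fp=25, fn=80, tn=360).items():
    print(f"{name}: {value:.3f}")
# sensitivity: 0.600, specificity: 0.935, ppv: 0.828, npv: 0.818
```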
PCR-based assays have been recommended as the gold standard for diagnostic testing for many infectious diseases, 15 with qRT-PCR detection of rotavirus and coronavirus shown to be both highly sensitive and specific when the correct primers and probes are selected. 15,22 These assays can increase the sensitivity of detection by up to 100-fold compared with one-step RT-PCR. 23 The two real-time methods that have been used for the detection of coronavirus in faeces are a TaqMan assay and a SYBR Green-based assay. 15,24 Detection levels achieved using the TaqMan assay for coronavirus have been in the order of 10^1-10^9 RNA copies, 10-fold more sensitive than gel-based RT-PCR. 15 The assay using SYBR Green chemistry has similar detection levels but is pan-reactive and designed to detect any coronavirus, unlike the bovine coronavirus-specific TaqMan assay. 24 In preliminary studies, we found that the SYBR Green-based assay performed erratically and could not be considered as a routine diagnostic test. The assay described by Decaro et al. 15 was compared with our TaqMan-based assay and produced very similar results, but the assay used here sometimes had slightly higher analytical sensitivity (Kirkland, unpubl. data). Gutiérrez-Aguirre et al. were the first to describe a real-time TaqMan qRT-PCR for the detection of both human and animal rotaviruses. 22 Both the rotavirus and coronavirus qRT-PCR assays used in the present study were found to be invaluable and provided the capacity for relative quantification of the amount of viral RNA in the samples. Evidence of a high viral load in samples tested by qRT-PCR gives the clinician more confidence that the virus identified is likely to be involved in the disease process. This does not exclude the possibility that lower concentrations of RNA may be significant, as the concentration can be influenced by factors such as the stage of disease, the quality of the sample collected, and appropriate storage and handling during transport. The RNA extraction and qRT-PCR technologies used also allow large numbers of samples to be tested in a short time and, as a result of the relatively low labour input, can be conducted at much lower cost than conventional RT-PCR. Although there was a significant association between the ELISA and qRT-PCR results for both viruses, the weak correlations indicate that the results of one assay provided a poor prediction of the results that would be expected with the other. The qRT-PCR assays for rotavirus and coronavirus both detected a higher proportion of positive faecal samples. Similar findings have been reported by others comparing qRT-PCR and ELISA, and were not unexpected. 22 Although it is possible that the discrepant results were related to false positives in the qRT-PCR assays, we consider this unlikely, especially considering the high viral load detected in some of the samples. Further, it is known that, for rotavirus, the qRT-PCR-positive/ELISA-negative samples in this study did contain viral RNA, as demonstrated by further subtyping PCR assays (Kirkland, unpubl. obs.). A large proportion of samples (73.7%) that were rotavirus qRT-PCR-positive were negative by ELISA. The manufacturer of the ELISA kit evaluated in this study does not supply minimum levels of detection. Detection limits of published ELISAs have been in the order of 10^4-10^6 virions/mL. 14,25,26 The high number of ELISA false negatives is likely, in part, to reflect the higher analytical sensitivity of the qRT-PCR assay.
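Relative quantification from qRT-PCR rests on the log-linear standard curve implied by the 10^1-10^9 copy linear range quoted above: Ct = m x log10(copies) + b. The sketch below simply inverts that relationship; the slope and intercept are illustrative assumptions (a slope of about -3.32 corresponds to 100% amplification efficiency), not values reported for the assays in this study.

```python
# Sketch of how a Ct value maps back to an approximate starting copy
# number through a log-linear standard curve, Ct = m*log10(copies) + b.
# The slope and intercept are illustrative assumptions (a slope of
# -3.32 corresponds to 100% amplification efficiency), not values
# reported for the assays in this study.

SLOPE = -3.32      # assumed: cycles per 10-fold dilution of template
INTERCEPT = 40.0   # assumed: Ct expected for a single template copy

def copies_from_ct(ct):
    """Invert the standard curve to estimate the starting copy number."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def efficiency(slope):
    """Amplification efficiency implied by a standard-curve slope."""
    return 10 ** (-1.0 / slope) - 1.0

print(f"{copies_from_ct(26.7):.2e} copies")    # ~1.0e+04 under these assumptions
print(f"efficiency: {efficiency(SLOPE):.1%}")  # ~100%
```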
Contradicting this argument, a proportion of the ELISA-negative/qRT-PCR-positive samples had low Ct values, indicating a high viral load. However, the different analytes detected by these assays should not be overlooked, and it is possible that the relatively stable double-stranded RNA of rotavirus may persist under adverse conditions for longer than the protein viral antigens. The rotavirus ELISA used in this study targeted the VP7 outer capsid protein. Degradation of the outer virion protein has been described as a cause of false negatives in rotavirus ELISA assays, 27 and stabilisation of the outer capsid can be achieved by including calcium chloride. 28 False-negative results may also reflect the presence of complexing antibodies, high concentrations of faecal material, decreased affinity of the detecting antibodies or the presence of proteases. 5, 9, 21, 29, 30 The poor sensitivity of the coronavirus ELISA may also in part reflect its limits of detection. However, the poor correlation between the ELISA and qRT-PCR suggests that the variance in the results is likely to reflect other variables. The commercial ELISA evaluated in this study used a polyclonal antibody against the spike (S) protein. The coronavirus virion comprises the nucleocapsid (N) protein and four envelope-associated structural proteins: the haemagglutinin-esterase (HE), the S protein, the small membrane protein (E) and the transmembrane protein (M). 31-33 The S protein of coronavirus is a common antigen used for ELISA assays. 12, 34 Antigenic variability arising from polymorphism in the S protein has been observed and attributed to a single point mutation in the S gene, 21, 35 which may lead to altered antibody binding. False negatives may also be related to loss of the S protein during degradation of virions or during transport and processing of samples. 36, 37 The development and application of quick, calf-side diagnostic tests is appealing to veterinary practitioners and producers because it avoids the inherent delays associated with shipping samples to diagnostic laboratories. For these tests to benefit livestock producers, it is important that users appreciate each test's limitations. Validation data for commercially available diagnostic tests are often scarce and may be difficult to obtain. According to the manufacturer (Bio-X® Diagnostics; Jemelle, Belgium), the sensitivity and specificity of the rotavirus LAT when tested against double-stranded RNA electrophoresis on polyacrylamide gel were 96% and 100%, respectively. The reported sensitivity and specificity of the coronavirus LAT when tested against RT-PCR were 63.6% and 97.4%, respectively. The sensitivity, specificity, and positive and negative predictive values of both the rotavirus and coronavirus LAT assays were lower in the present study. The low positive predictive value of the coronavirus LAT assay (36.7%) reflected the relatively low prevalence of coronavirus in the population sampled, as well as low test sensitivity and specificity when compared with qRT-PCR. The negative predictive value of the coronavirus LAT (72.6%) was considerably higher, but below the negative predictive value of the ELISA (81.3%) when compared with qRT-PCR.
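The dependence of predictive values on prevalence noted here, and developed further in the next paragraph, can be made explicit with Bayes' theorem. In the sketch below, the sensitivity and specificity are hypothetical placeholders; the two prevalence values (22% for coronavirus, 80% for rotavirus) are those reported for this study population.

```python
def predictive_values(sens: float, spec: float, prev: float):
    """PPV and NPV from test characteristics and pre-test prevalence."""
    tp = sens * prev                   # true positives per unit population
    fp = (1.0 - spec) * (1.0 - prev)   # false positives
    fn = (1.0 - sens) * prev           # false negatives
    tn = spec * (1.0 - prev)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Same hypothetical test at the two prevalences seen in this study:
# PPV rises and NPV falls as the pathogen becomes more common.
for prev in (0.22, 0.80):
    ppv, npv = predictive_values(sens=0.60, spec=0.90, prev=prev)
    print(f"prevalence={prev:.0%}  PPV={ppv:.2f}  NPV={npv:.2f}")
```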
The higher prevalence of rotavirus infection in the population sampled provided a higher positive predictive value (81.4%) for the rotavirus LAT when compared with qRT-PCR, but given the high population prevalence and prior probability of infection, the test provided little additional diagnostic information. The negative predictive value of the rotavirus assay was also extremely low (8.9%) when compared with qRT-PCR, providing essentially no diagnostic value. Recently, Klein et al. evaluated a commercial rotavirus and coronavirus dipstick using faeces from 180 calves (98 with diarrhoea) aged 1-42 days against an RT-PCR assay. 38 The coronavirus assay in that study showed greater sensitivity (60%), specificity (96.4%), and positive (91.3%) and negative (79.1%) predictive values than the coronavirus LAT assay examined in our study. The rotavirus assay in that study also showed greater sensitivity (71.9%), specificity (95.3%) and negative predictive value (94%), but a lower positive predictive value (76.7%) than the rotavirus LAT assay in our study. Possible reasons include the greater analytical sensitivity of qRT-PCR compared with conventional RT-PCR 23 and the different prevalence of viral pathogens in the two studies (i.e. the prevalence of coronavirus according to RT-PCR in the study of Klein et al. was 38%, whereas it was 22% in our study, and the prevalence of rotavirus in their study was 38.9% compared with 80% in our study). A possible explanation for the poor performance of the dipstick is that the viral antigen targeted by the LAT dipsticks may have been damaged during transport of the samples. It is possible that better results may have been achieved if the dipsticks had been used at the point of sample collection. However, the application of the tests in the current study was consistent with the use of the dipsticks in a veterinary clinic or diagnostic laboratory. The sensitivity of the LAT when compared with ELISA for the detection of rotavirus was moderate (67.8%), with very good specificity (95.2%). A previous study comparing the detection of rotavirus using LAT found a sensitivity and specificity of 70% and 100%, respectively, when compared with electron microscopy of 74 faecal samples from calves with acute diarrhoea. 6 Luginbühl et al. 39 also found that the same rotavirus dipstick as that studied by Klein et al. 38 had much lower sensitivity (57%) but greater specificity (100%) when compared with an ELISA for the detection of antigens in the faeces of 60 calves. Possible reasons for the difference in our results are that the sample size was much greater in our study and that a more sensitive technology was used as the reference assay. The sensitivity and specificity of both the commercial ELISA and LAT assays evaluated in this study were low compared with qRT-PCR. The low positive and negative predictive values of the assays suggest that they were of limited diagnostic benefit in the population sampled. The qRT-PCR assays offer an alternative diagnostic methodology that is both sensitive and semiquantitative, and thus more informative for clinicians interpreting the significance of a pathogen during disease investigations. Further studies are warranted to develop a better understanding of the clinical relevance of the different levels of viral RNA detected by qRT-PCR assays. When this information becomes available, the higher cost of qRT-PCR assays may be offset by both their superior diagnostic performance and the value of the quantitative information that can be obtained.
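As a closing methodological note: the "agreement" statements made throughout this comparison are commonly summarized with Cohen's kappa, which corrects raw percentage agreement for the agreement expected by chance. The source does not state which agreement statistic was tabulated, so this is offered only as a general sketch with hypothetical counts.

```python
def cohens_kappa(a_pos_b_pos: int, a_pos_b_neg: int,
                 a_neg_b_pos: int, a_neg_b_neg: int) -> float:
    """Chance-corrected agreement between two dichotomous assays A and B."""
    n = a_pos_b_pos + a_pos_b_neg + a_neg_b_pos + a_neg_b_neg
    observed = (a_pos_b_pos + a_neg_b_neg) / n
    # Expected agreement if A and B were independent with the same margins.
    p_a_pos = (a_pos_b_pos + a_pos_b_neg) / n
    p_b_pos = (a_pos_b_pos + a_neg_b_pos) / n
    expected = p_a_pos * p_b_pos + (1 - p_a_pos) * (1 - p_b_pos)
    return (observed - expected) / (1 - expected)

# Hypothetical counts: 75% raw agreement yields kappa of only ~0.14 when
# one result dominates, which is why chance correction matters.
print(round(cohens_kappa(70, 10, 15, 5), 2))
```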
Coronavirus disease 2019 (COVID-19) is a highly transmissible disease caused by a novel coronavirus that emerged in Wuhan, China, and was named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) by the International Committee on Taxonomy of Viruses (ICTV) (1). The disease rapidly disseminated globally and, as of 10 August 2020, had killed nearly 0.75 million people out of more than 20 million confirmed cases, triggering global alarm. The pandemic has significantly affected the survival and sustenance of human populations and has raised serious questions for the global scientific community about pandemic preparedness. The infection may have originated with the transmission of SARS-CoV-2 to humans from wild animals on sale in the Huanan seafood market of Wuhan (2, 3). The pneumonia caused by SARS-CoV-2 was designated COVID-19 by the World Health Organization (WHO) on 11 February 2020. Bats may be the natural reservoir of the virus, and the search for an intermediate host is underway (4). Pangolins have been suggested as the probable intermediate host; however, there is no conclusive evidence yet (5). Spillover events and zoonotic links have been implicated in the origin of SARS-CoV-2, and infection with the virus has been reported in a few animal species (6). The search for effective therapies and vaccines is ongoing worldwide (7-10). The large number of fatalities caused by SARS-CoV-2 attests to the severe global impact of the pandemic, irrespective of age, race, sex, and physiological condition. A report has suggested that everyone will be exposed to SARS-CoV-2 and that most of the world's population will be infected (11). Although COVID-19 affects all ages, individuals with comorbidities, such as diabetes, asthma, hypertension, cerebro-cardiovascular abnormalities, and cancer, as well as immunocompromised and elderly people, are affected more severely and exhibit a higher mortality rate (12-14). Age is believed to be a significant determinant of the clinical outcome, severity, disease course, and prognosis of the disease (14, 15). However, many facets remain to be discussed. This review highlights SARS-CoV-2 infection in the geriatric population, risk factors, pandemic-related concerns, and the attention required for the elderly. It also considers the development of an effective and safe vaccine for this group. The elderly, especially those with underlying diseases, are more susceptible to COVID-19 (2, 14, 16, 17). Initial studies of COVID-19 revealed more cases in people 49-55 years of age (2, 16). Subsequent studies involving more people demonstrated that the prevalence of the disease was higher in individuals ≥60 years of age than in younger individuals (14, 18). In developed countries with a very high elderly population, 83.7% of COVID-19 deaths reportedly occurred in those >70 years of age and 16.2% in people ≤69 years of age (19). Underlying diseases were noted in 32-51% of cases (2, 16). A study also found that SARS-CoV-2 infection is more often associated with detrimental effects in the geriatric population than in younger age groups (20). A retrospective study of 85 patients who died of SARS-CoV-2 infection in Wuhan reported a median patient age of 65.8 years.
Among these individuals, underlying non-communicable chronic conditions, such as hypertension, diabetes, and cardiopulmonary diseases, were the most commonly observed comorbidities (21). Most of the patients died of multiple organ failure. Clinical manifestations were reportedly more severe and the disease course more prolonged in the elderly, requiring closer monitoring and more medical intervention (14). The geriatric population faces special risks from COVID-19, with both greater predisposition to infection and more severe outcomes (22). Older age and underlying diseases have been noted as the main factors for vulnerability to COVID-19; an age ≥60 years is a major risk factor (14, 18). Comorbidities are the main underlying etiologies, collectively present in 32-60% of cases; specific rates include 16-20% for diabetes, 15-41% for hypertension, and 14-15% for chronic obstructive pulmonary disease and cardiovascular disease (2, 13). Advancing age presumably brings an inevitable decline in the health of vital organs. Furthermore, an age-related decline in the physiological function of multiple organs, including the respiratory system with its attendant impairment of mucociliary clearance of foreign particles and micro-organisms, is expected (23, 24). Aging alters pulmonary physiology, pathology, and function during lung infections, which affects responsiveness and tolerance in older patients (14, 25). Angiotensin-converting enzyme 2 (ACE2), expressed on myocytes, renal endothelial cells, and lung epithelial cells, acts as the receptor for SARS-CoV-2 (26). Old age has also been associated with weakened physiological functioning of various vital organs and of innate/adaptive immune defense; furthermore, in association with underlying chronic diseases, acquisition of infections is more likely (27). Aging increases the production of interleukin-6 (IL-6) in the brain, and microglia show increased expression of voltage-activated K+ channels, potentially enhancing IL-6 production and neuroinflammation with age (28). In innate and adaptive immunity, regulation of membrane potential and calcium influx is determined by the equilibrium potentials of K+ (KV1.3, KCa3.1), Na+ (TRPM4), and Cl− channels in the plasma membrane. Altered immune function through ion signaling can profoundly increase susceptibility to COVID-19 (29). Other risk factors include poor nutrition, dementia, dehydration, and various clinical complications, especially in frail and bedridden patients (30). A lack of timely diagnosis, therapy, and preventive measures increases the risk of severe infection. In addition to compromised organ function and immunity in the elderly, pathophysiological susceptibility further increases their vulnerability to SARS-CoV-2, the attack rate, and infectivity (31). The pneumonia severity index (PSI) score is higher in the elderly than in young and middle-aged individuals (15). In one study, the proportion of patients with PSI grade IV or V was significantly greater in the elderly group than in the young and middle-aged groups (14). Severe complications of COVID-19 in older people can include acute respiratory distress syndrome, multiorgan failure, and death, especially in cases with underlying comorbidities (22). The strict and prolonged lockdowns initiated in many locales to prevent the spread of SARS-CoV-2 restricted physical activity and produced social-isolation-associated stress.
These factors may further erode the health of older people, contributing to adverse health outcomes in this population (32). Increased age is a major risk factor for COVID-19 owing to various factors, including a weakened immune system, physical inactivity, and stress. These factors require special attention in addressing the pandemic among the elderly. Social distancing and disconnection can predispose the elderly to depression and anxiety, which may further increase the risk of adverse COVID-19 outcomes (33). In addition, secondary complications arising from general care and management also need to be addressed in the elderly. These complications include venous thromboembolism, catheter-related bloodstream infection, pressure ulcers, falls, and delirium (22). The clinical presentation of COVID-19 is variable and ranges from an absence of symptoms to mild, severe, or life-threatening disease. In a study of 72,314 COVID-19 patients, mild, severe, and critical forms of the disease were reported in 81.4, 13.9, and 4.7% of patients, respectively (20). Alongside fever, dry cough is an important and common clinical manifestation, with coughing reported in 60-80% of COVID-19 patients (16, 34). Other respiratory symptoms include dyspnea, sore throat, and rhinorrhea (7, 16, 35, 36). Clinical manifestations have included anorexia, myalgia, asthenia, headache, anosmia, diarrhea, and cardiovascular complications (36, 37). The most common symptom of infection is fever; however, elderly patients frequently have a low-grade fever or no fever, even in severe cases (38). In one study, 77.7% of the 18 COVID-19 patients >60 years of age manifested fever, suggesting that SARS-CoV-2 infection is not necessarily accompanied by fever (14). The presenting features of COVID-19 can differ between elderly and young or middle-aged individuals (14). SARS-CoV-2 infection reportedly involves elderly men more often than elderly women; a predilection for elderly patients was also reported for Middle East respiratory syndrome coronavirus (MERS-CoV) (39, 40). A detailed age-specific analysis of COVID-19 symptoms has not been performed; however, non-specific and atypical clinical presentations are to be expected in elderly patients, as is the case in other diseases (41). Moreover, a higher frequency of severe disease and mortality, along with a greater need for intensive care unit (ICU) admission, is expected in elderly patients. The most frequent hematological laboratory finding in critically ill COVID-19 patients is severe lymphocytopenia (<800 cells/µL), which seems to be more pronounced in older patients (14). Compared with people <60 years of age, those >60 years of age display higher levels of blood urea nitrogen, lactate dehydrogenase activity, and inflammatory indicators (14). Greater involvement of pulmonary lobes, bilateral lesions, and more frequent bacterial co-infection have also been reported (14). C-reactive protein (CRP) was found to be significantly higher, and the lymphocyte proportion significantly lower, in elderly individuals than in younger and middle-aged individuals (15). From what is known, a high death toll among the global geriatric population due to SARS-CoV-2 can be expected. The severe impact of the COVID-19 pandemic is more frequently documented in developed countries with a higher life expectancy, such as Italy.
A 7.2% overall case-fatality rate was reported in Italy, significantly greater than the 2.3% rate in China (42). Age stratification of the data revealed a nearly identical case-fatality rate in Italy and China for individuals ≤69 years of age; the rate was higher in Italy for those ≥70 years of age, particularly those ≥80 years of age (42). Moreover, of 1625 fatal cases of COVID-19, 139, 578, and 850 patients were 60-69, 70-79, and ≥80 years of age, respectively (42). Another study of 4021 positive cases indicated a mortality rate of 5.3% in the geriatric population (≥60 years of age) compared with 1.4% in young and middle-aged individuals (14). In New York, among 5700 hospitalized COVID-19 patients, the in-hospital mortality rate was 15.8, 32.2, 54.3, and 52.3% for adults aged 60-69, 70-79, 80-89, and >90 years, respectively (43). Older COVID-19 patients with dementia may exhibit mild and atypical symptoms, including diarrhea or drowsiness. However, such old and frail patients are less likely to survive COVID-19. Adequate and appropriate supportive measures and clinical care may improve their survival, even without the use of targeted therapies. Moreover, some COVID-19 patients may die of worsening underlying comorbid conditions during the infection rather than of the infection itself (30). In this context, poor nutrition, dementia, dehydration, and other clinical complications are common in frail and bedridden patients even with mild infective diseases, and these well-established risk factors lead to worsening health and death if adequate supportive measures are not provided in time (30). Immune dysfunction and the severity of inflammation are other reasons for increased mortality in COVID-19 patients (2, 16, 44). Antibody-dependent enhancement (ADE) due to cross-reactive antibodies produced in the course of previous infections by other viruses may be a possible cause of this phenomenon (45). Although COVID-19 results in acute respiratory distress syndrome due to acute lung injury in elderly people, which causes many of the deaths, heart attack could be a principal reason for mortality in older people with COVID-19, irrespective of the occurrence of pneumonia (46). Among older COVID-19 patients, a higher Sequential Organ Failure Assessment score and elevated D-dimer (>1 µg/mL) have been identified as markers of an increased risk of death; these markers could be used to identify patients with a poor prognosis at an early stage (47). Another study involving 179 patients with pre-existing concurrent cardiovascular or cerebrovascular diseases showed an association of high cardiac troponin with a high risk of mortality (21). The current lack of specific vaccines for SARS-CoV-2 and of any efficacious medications is the main challenge in the treatment of COVID-19. Immunocompromised elderly people are especially at risk. An effective vaccine may take more than a year to become widely available; however, considering the rapid pace of vaccine development, there are grounds for optimism concerning the availability of an effective COVID-19 vaccine sooner rather than later (48). In the present scenario, self-quarantine or self-isolation is enforced in most countries to control or mitigate the overwhelming detrimental effects of this pandemic. The recommended measures to prevent the spread of this deadly virus include the regular use of personal protective equipment (PPE), physical distancing, and self-isolation.
Social distancing emphasizes reducing the number of cases and preventing community spread. However, this social disconnection has promoted mental deterioration, depression, and suicide attempts in the geriatric population (49). Self-quarantine during this critical phase of the COVID-19 outbreak is specifically oriented toward "social distancing, not social isolation." Hand hygiene and respiratory etiquette are also essential recommendations for older people (50). The surroundings in which geriatric people live should be disinfected frequently to prevent contamination of surfaces and reduce the chance of infection (50). Healthcare workers, family members, and caregivers of older people should actively implement these basic protocols to prevent COVID-19 infection in the older population (51). Initial clinical trials of a SARS-CoV-2 spike-based DNA vaccine and an inactivated virus vaccine reported that the vaccines were safe and induced good neutralizing antibody titers (52, 53). The receptor-binding domain (RBD) of SARS-CoV-2 is reportedly a potential antigen and has been suggested as a crucial subunit vaccine candidate (54). Moreover, an mRNA-based vaccine (the mRNA-1273 COVID-19 vaccine) has so far proven safe and is in the clinical trial stage (55). A DNA plasmid-based vaccine for COVID-19, designated INO-4800, is being developed by INOVIO Pharmaceuticals and will be administered as two intradermal injections followed by electroporation (55). Issues with vaccination in older people (who will be the main target of vaccination) include their weaker immune systems, which can compromise the recognition of and response to novel viruses (56). In addition, increasing the strength of a vaccine may cause side-effects in older people, while weaker vaccines may require regular booster doses. Hence, when vaccines are developed, they will need to be effective for older people (57). Some trials have focused on enrolling older adults, taking into account the weaker immune system of these individuals. In this context, a chimpanzee adenovirus vector (ChAdOx1)-based vaccine under development for SARS-CoV-2 by the Jenner Institute, Oxford, has reached the phase I/II clinical trial stage. The chimpanzee adenovirus vector is non-replicating and is reported to generate a strong immune response; thus, it can be used safely in older individuals as well as in children and individuals with comorbidities (55, 58). Another adenovirus vector-based vaccine (Ad5-nCoV) is among the top contenders for a COVID-19 vaccine according to the WHO; however, immunity against the Ad5 vector, along with the safety of the vaccine, are major concerns that must be addressed before this vaccine could be used in the geriatric population (59). SARS-CoV-2 may produce varied pathogenesis, immune responses, and outcomes in older people, including more severe disease, higher mortality, and a prolonged disease course (14, 15, 60). In older individuals, a dysfunctional immune system can lead to a dysregulated immune response characterized by excessive infiltration of immune cells, cytokine storm, pulmonary edema, pneumonia, widespread inflammation, and multiorgan failure. A healthy immune response usually clears the infection quickly, inactivating the virus with neutralizing antibodies and causing minimal inflammation and tissue damage.
However, slower-responding, less coordinated, and less efficient immune responses in older individuals can render them more susceptible to emerging infections (60). The reported inability to switch from innate to adaptive immunity (with little to no antibody production) in SARS-CoV infections, especially in older individuals, may need re-examination (60). In general, most vaccines may not induce an effective immune response in older people; however, some vaccines work very well in the elderly (61, 62). For example, the Shingrix vaccine for shingles was found to be 90% effective in people >70 years of age. Immune responses vary greatly among elderly people, and understanding this variability can help in developing new and improved vaccines to protect the most vulnerable elderly people (63). Age-appropriate adjuvants need to be explored (60). The incidence of physical violence and of discrimination, in the form of maltreatment and unequal access to healthcare facilities, has increased during this global pandemic. However, the pivotal role played during this pandemic by the elderly among retired scientists, health workers, and others, in sharing their past experience and providing moral support to those worried about COVID-19 and to their family members, cannot be ignored. Today's elderly witnessed World War 2, and this experience could help guide policymaking in the aftermath of the current pandemic, with the goal of lessening existing global socioeconomic disparities (64). The WHO must issue specific guidelines concerning COVID-19-related care for older and geriatric individuals and for people with disabilities (65). These individuals are vulnerable, and they need special attention in the form of social support interventions. The significantly adverse impact of COVID-19 on older people globally, whether they reside in developed or developing countries, can be attributed to a lack of preparedness and a lack of recognition of geriatric health. Fatalities in the geriatric population can be minimized to some extent by providing immediate and adequate supportive measures, good health facilities, nursing homes, and care units (66). The elderly must be acknowledged and honored for the contributions they made to society during their working lives, by assisting them to maintain social relationships alongside the desired social distancing. The extended lockdowns in many nations have made it difficult for some elderly people to obtain food, especially those living alone or without family members nearby. It is important for citizens, civic bodies, and non-governmental organizations, as well as industry leaders, to come together and help them in this vulnerable time. Online platforms need to be explored for the betterment of older people, so that they do not feel isolated and forgotten, to foster a sense of belonging, and to provide social support (67). Older people may not be familiar with online technologies, including smartphones and the internet. To reduce depression and mental stress in the geriatric population, regular behavioral therapy via online motivation and monitoring programs should be implemented. A pilot randomized controlled trial that assessed the feasibility of reducing loneliness by internet-based cognitive and behavioral interventions reported encouraging results (68). During the current pandemic, elderly people should be encouraged to use cell phones, online games, radio, and television, to engage in indoor exercise such as yoga, and to listen to music (69).
Promotion of proper sleep, balanced nutrition, physical activity, and social care in the lifestyle of the geriatric population can reduce the negative effects of SARS-CoV-2 infection. A multidimensional and age-friendly approach with better healthcare strategies and minimal physical and physiological stressors can be helpful. A number of social welfare programs specifically catering to the elderly should be developed at the local, state, and national levels by various government and non-government organizations. The importance of the assistance provided by younger individuals cannot be overstated; this assistance includes running errands, acquiring and delivering groceries, the timely provision of medications, and transportation during medical emergencies. The COVID-19 pandemic has highlighted the need for adequate nursing care for the elderly that is evidence-based and tailored to the needs of this population. A strong public health response and global preparedness to protect the elderly at risk for infectious diseases, including COVID-19, are needed (70). A commentary contributed by 20 international researchers in the field of aging raised the issues of the lack of preparation for crises such as COVID-19 in long-term care homes and the initial public perception that the virus was a problem of elderly people (71). Higher mortality rates among the elderly have devastating consequences for families, as the elderly are a source of generational knowledge and wisdom and contribute to the workforce critical for the economy and the family (71). Mental health is also one of the important cornerstones of public health for the elderly. There is a need for regular telephone counseling sessions, contact with family members, provision of relevant and updated information on the pandemic, a continued supply of general medications, attention to psychological needs, and the instilling of a sense of respect and dignity to maintain healthy mental status among the elderly (72). The COVID-19 pandemic has revealed the need for a new era of care for older people, including the use of communication technology, more home-based care, and novel approaches to enhance the resilience of the elderly to stress and depression (73). This resilience will build stronger elderly communities with better physical and mental health. Figure 1 provides an overview of the COVID-19 pandemic in older people, the associated risk factors, related worries, the need for special attention and care during the pandemic, and the development of effective and safer vaccines and mitigation strategies. The COVID-19 pandemic has spread unimpeded. Millions have been infected with SARS-CoV-2 and over 715,000 have died. Numerous factors are involved, as reflected in the higher rates of infection in certain classes of society and in different locations. Although individuals of all age groups and diverse physiological conditions are susceptible to infection by the virus, the severity and mortality of COVID-19 are higher in geriatric individuals. Old age, weaker immunity, and underlying diseases are the main predisposing factors in these people. Immunosuppression, decreased organ vitality, and poor healthcare management have increased the suffering of the elderly. Besides increased susceptibility to infection, a dysregulated immune response and hyperinflammation aggravate the pathophysiology of COVID-19, resulting in higher disease severity and consequently increased mortality in the elderly.
Prevention measures need to focus on the special requirements of the geriatric population for health, nutrition, and psychological and mental well-being. Physical distancing, rather than social isolation, along with proper hand and respiratory hygiene, needs to be supported by the provision of personal protective equipment, environmental disinfection, and a nutritious diet. Regular behavioral therapy via online motivation and monitoring programs, with support for older people who are not well-versed in online technologies, may reduce depression and mental stress in the geriatric population and increase their survival. A multidimensional, age-friendly approach will certainly minimize physical and physiological stress and help diminish the toll of the pandemic. Regular monitoring and care of elderly people will help ease COVID-19-related worries and will facilitate better management of the pandemic. Therapeutics and vaccines must be designed with the elderly in mind to avoid a heavy death toll. Ignorance and insufficient healthcare monitoring and services for the geriatric population may lead to increased mortality. Therefore, health agencies worldwide must pay attention to the geriatric population and issue guidelines specific to this age group. KD: validation, conceptualization, writing-original draft, writing-review and editing, and visualization. SP, RK, JR, MY, AK, RT, JD, SN, and RS: validation and writing-original draft. HH: validation, writing-original draft, and writing-review and editing. All authors contributed to the article and approved the submitted version. All authors acknowledge and thank their respective institutes and universities.
With the advent of highly active antiretroviral therapy (HAART), human immunodeficiency virus type 1 (HIV-1) can be controlled for prolonged periods, [1] although the virus cannot be eliminated [2] and treatment failures occur due to the development of drug-resistance mutations. [3] Chronic immune hyperactivation and raised T-cell turnover due to continued viral replication and antigenic stimulation are present even after HAART has decreased the viral load to undetectable levels. [4] Both proinflammatory and regulatory cytokines are produced during chronic immune stimulation. Proinflammatory cytokines, such as interleukin (IL)-1, IL-6, and tumor necrosis factor-alpha (TNF-alpha), contribute to tissue pathology, especially in the brain, [5] and can induce transcription of latent HIV-1. [6,7] Type 2 or regulatory cytokines, such as IL-4, IL-6, and IL-10, can suppress type 1 cytokines and induce polyclonal B-cell activation, [8] lymphomagenesis, [9] autoantibody production, [10] and manifestations of allergy. [11] Type 1 cytokines, such as IL-12, interferon (IFN)-gamma, and IL-2, are important for antiviral cell-mediated immunity. [12] During the long course of HIV-1 infection, type 2 cytokines gradually come to predominate over type 1 cytokines, [13-16] although this finding is not universally accepted. [17] There have been few studies of in vitro cytokine production in neonatally acquired HIV-1 infection in Asian or Chinese children, and the enzyme-linked immunospot (ELISPOT) system for measuring unstimulated or mitogen-activated cytokine-secreting cells has not been evaluated in this context. We wished to know whether monitoring cytokine production, in addition to CD4+ cell counts and viral load, could provide additional useful information in pediatric patients with HIV-1 infection being treated with HAART. We hoped to identify cytokine profiles characteristic of either clinical improvement or disease progression, so that manipulation towards the desirable profile might be attempted. This study was approved by the Institutional Review Board of the Hong Kong West Hospital Cluster and The University of Hong Kong, and informed consent was obtained from the parents of all subjects. Clinical findings in 8 of the patients have been described previously. [18] Ten Asian and 2 Eurasian children, 4 girls and 8 boys, were infected by mother-to-child transmission of HIV-1. They were initially diagnosed between 1996 and 2002 at ages 3-64 (median, 32) months, and they have been followed for 9-61 (median, 44) months (Table). At the time of diagnosis, 9 children had low CD4+ cell counts (compared with the age-specific normal range [19]) and the median plasma HIV-1 RNA level was 500,000 copies/mL (range, 110,000-1,300,000). All children had lymphadenopathy and/or hepatosplenomegaly at diagnosis. One girl (patient 3) developed NKT-cell lymphoma, which caused her death, the only fatality during the study period. Most of the patients had infectious complications, including Pneumocystis carinii pneumonia (1), viral pneumonia (1), disseminated Penicillium marneffei infection (1), thrush (4), tinea capitis (1), and herpes simplex (1). Other complications included neutropenia in 1 patient, hepatitis and anemia in 1, and asthma and/or rhinitis in 3.
Patients were started on HAART immediately after confirmation of HIV-1 infection and were treated with 2 nucleoside reverse transcriptase inhibitors (zidovudine, lamivudine, didanosine, stavudine, and/or abacavir) plus 1 protease inhibitor (indinavir, nelfinavir, Kaletra (lopinavir + ritonavir), ritonavir, or amprenavir) or the non-nucleoside reverse transcriptase inhibitor nevirapine. Details are given in the Table. Patients were examined, and blood for hematologic, virologic, and immunologic evaluation was taken, every 2-6 months. The first cytokine evaluation was performed within 1 month of starting HAART in 7 patients, within 2-4 months in 3 patients, and after 16 and 19 months in 2 patients. Numbers of cytokine-secreting cells in unstimulated cultures, or in cultures stimulated with the T-cell activators phytohemagglutinin (PHA) or Concanavalin A (Con A) or with the monocyte activator Staphylococcus aureus Cowan I (SAC), were determined using ELISPOT assays. [20,21] Details of our adaptation of this method and its specificity and reproducibility (intra- and interassay CVs 8.8 ± 5.8% and 13.2 ± 4.9%, respectively) have been reported. [22-26] Results for normal controls evaluated over the study period remained stable within our established reference ranges. Briefly, peripheral blood mononuclear cells (PBMCs) were separated over Lymphoprep (Nycomed; Oslo, Norway) within 1 hour of blood collection and added to 96-well Multiscreen plates (Millipore; Bedford, Massachusetts, USA) that had previously been coated overnight at 4°C with cytokine capture antibodies (Pharmingen; San Diego, California, USA) at 2 (IL-4, IL-6, IL-10), 4 (IL-12, TNF-alpha), or 8 (IFN-gamma, IL-2) mcg/mL in 0.1 M NaHCO₃, pH 8.2, and blocked with 5% fetal calf serum (FCS) in RPMI 1640 culture medium for at least 1 hour at 37°C. Duplicate cultures of 10⁴ (for IL-6 and TNF-alpha) or 10⁵ (for IFN-gamma, IL-2, IL-4, IL-10, and IL-12) viable cells/well in RPMI + 5% FCS, with or without PHA at a final concentration of 10 mcg/mL, Con A at 20 mcg/mL, or SAC at 0.001% v/v, were incubated for 22 hours at 37°C in 5% CO₂. Cells were then washed out with 0.01 M phosphate-buffered saline containing 0.05% Tween 20 (PBS-T), and the plates were incubated sequentially with biotinylated anti-cytokine detection antibodies (Pharmingen) at 0.5 mcg/mL in PBS-T for 90 minutes, streptavidin-alkaline phosphatase (Sigma; St. Louis, Missouri, USA) at 1/400 v/v in PBS-T for 60 minutes, and 5-bromo-4-chloro-3-indolylphosphate-nitroblue tetrazolium (Calbiochem; La Jolla, California, USA) for 20 minutes, all at room temperature. Plates were washed extensively with PBS-T between each incubation, and with saline to remove phosphate prior to the addition of phosphatase substrate. Color development was stopped, and pathogens inactivated, by immersion in 2% Clorox bleach, followed by rinsing under the tap and allowing the plates to dry for 1 hour. Blue spots, each corresponding to a cytokine-secreting cell, were counted by microscopy and results were expressed as ELISPOTs/10⁶ PBMCs. CD3+4+ T-helper cells and CD3+8+ T-cytotoxic cells were enumerated using commercial monoclonal antibodies (Beckman Coulter; Miami, Florida, USA) by dual-color flow cytometry (EPICS XL-MCL, Coulter). White cell and differential counts were performed by standard methods. HIV-1 RNA quantitation was by Amplicor HIV-1 Monitor (Roche Diagnostics Corporation; Branchburg, New Jersey, USA). The standard method, performed according to the manufacturer's recommendations, has a measuring range of 400-750,000 RNA copies/mL.
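For clarity, the assay readout reduces to a simple normalization: spot counts from duplicate wells are averaged and scaled to ELISPOTs per 10⁶ PBMCs according to the number of cells seeded. The sketch below illustrates this arithmetic; the well counts are hypothetical, and the optional background-subtraction step is an assumption, since the text does not state how unstimulated counts were handled for the mitogen-stimulated conditions.

```python
def elispots_per_million(stimulated_wells, cells_per_well, unstimulated_wells=None):
    """Scale mean spot counts from duplicate wells to ELISPOTs/10^6 PBMCs.

    cells_per_well is 1e4 for IL-6 and TNF-alpha or 1e5 for the other
    cytokines in this assay; subtraction of unstimulated background is
    optional and shown here only as a plausible handling.
    """
    mean_spots = sum(stimulated_wells) / len(stimulated_wells)
    if unstimulated_wells:
        mean_spots -= sum(unstimulated_wells) / len(unstimulated_wells)
    return max(mean_spots, 0.0) * (1_000_000 / cells_per_well)

# Duplicate PHA-stimulated wells of 1e5 cells with 42 and 48 spots and
# duplicate unstimulated wells with 3 and 5 spots -> 410 ELISPOTs/10^6 PBMCs.
print(elispots_per_million([42, 48], 100_000, [3, 5]))
```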
Correlations between numbers of cytokine-secreting cells and the proportions and absolute numbers of CD4+ and CD8+ cells, CD4:CD8 ratios, and virus load were evaluated by multiple regression analysis, with or without logarithmic transformation, and linear regression lines were plotted. Parametric rather than nonparametric statistics were used, despite the small number of patients, because we wished to derive formulae for estimation of cytokine levels predictive of viral load or lymphocyte subset count. Log transformation was performed for the viral load data because the skewness and kurtosis of the raw data were 3.7 and 12.8, respectively; these became 1.2 and 0.5, respectively, after log transformation. Curves of numbers of cytokine-secreting cells plotted against length of treatment with HAART were fitted by nonlinear regression. The statistical software used was GraphPad Prism Version 4.00 for Windows (GraphPad Software; San Diego, California, USA, www.graphpad.com). Twelve Asian or Eurasian children infected with HIV-1 by mother-to-child transmission were treated with HAART from the time of diagnosis. They were 3-60 (median, 25) months old when initially diagnosed and were therefore heterogeneous with regard to immunologic maturity, duration of infection with HIV-1, and extent of immunodeficiency due to HIV-1. They all had high viral loads and most had low CD4+ cell counts when diagnosed and entered into the study. One child died of lymphoma at age 29 months after she had received HAART for 9 months, during which time her CD4+ cells increased and the viral load decreased to undetectable. At the end of the study, the 11 surviving patients were well and thriving. Seven had normal or higher-than-normal circulating CD4+ cells/mcL, but 4 patients still had reduced numbers and/or percentages. Plasma HIV-1 RNA was consistently below the level of detection in all but 1 of the surviving children when the study closed. Undetectable plasma HIV-1 RNA was achieved in 2-55 months (median, 9.5 months). For each cytokine and culture condition studied while patients were receiving HAART, 112 corresponding values of CD4+ and CD8+ cells and 96 corresponding values of plasma HIV-1 RNA copies/mL were available. All of these data were used to examine whether cytokine production correlated with disease progression. IFN-gamma, IL-2, and IL-4 ELISPOTs were undetectable in unstimulated PBMCs, as reported previously. [22-26] Numbers of PHA- or Con A-stimulated IL-2-secreting cells increased during recovery from CD4 deficiency and correlated directly with CD4 and CD8 absolute counts, CD4 percentages, and CD4:CD8 ratios, and inversely with CD8 percentages, by multiple regression analysis.
[Figure: Multiple regression correlation of numbers of IL-2-secreting cells/10⁶ PBMCs with CD4+ and CD8+ T-cell counts in 12 pediatric patients treated with HAART.]
Numbers of Con A-induced IFN-gamma-, Con A-induced IL-4- and unstimulated IL-10-secreting cells increased significantly as virus load fell (Figure 4). The data were described by the following equation: log10 viral load = 7.453 − 0.6207 (log10 IFN-gamma, Con A) − 0.9504 (log10 IL-4, Con A) + 0.5434 (log10 IL-10, unstimulated). All of the data from the ELISPOT assays were plotted against duration of HAART. Numbers of IFN-gamma-, IL-2-, IL-4-, and IL-12-secreting cells tended to increase for the first 3-4 years of treatment but declined thereafter. Changes in IL-6-, IL-10-, and TNF-alpha-secreting cells over time were less apparent. See Figures 5-11.
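To make the use of such a regression formula concrete, the sketch below fits coefficients of this form by ordinary least squares on log-transformed values and then evaluates the fitted equation. The data array is fabricated purely for illustration, and the identification of the equation's first predictor as Con A-induced IFN-gamma is the reconstruction noted above, not a certainty.

```python
import numpy as np

# Columns: log10 IFN-gamma (Con A), log10 IL-4 (Con A),
# log10 IL-10 (unstimulated), log10 viral load. Fabricated example values.
data = np.array([
    [2.1, 1.4, 2.0, 5.3],
    [2.6, 1.9, 2.2, 4.1],
    [3.0, 2.3, 2.1, 3.2],
    [3.3, 2.6, 2.4, 2.7],
])
X = np.column_stack([np.ones(len(data)), data[:, :3]])  # intercept + predictors
y = data[:, 3]

# Ordinary least squares: minimizes ||X.beta - y||^2, mirroring the paper's
# multiple regression on log-transformed values.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept and coefficients:", beta.round(4))

# Predicted log10 viral load for a new set of cytokine counts.
new = np.array([1.0, 2.8, 2.0, 2.2])  # leading 1.0 multiplies the intercept
print("predicted log10 viral load:", float(new @ beta))
```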
The effect of HIV-1 on maturation of the immune system in general, and on cytokine production in particular, is not well understood, especially in the context of treatment with HAART. We wished to know whether regular monitoring of mitogen-induced cytokine production, in addition to CD4+ cell counts and virus load, would be a valid measure of immunologic competence and therefore a useful additional parameter for clinical monitoring. We also looked for correlations between cytokine production, viral load, and CD4+ cell numbers in the hope of identifying cytokine profiles associated with favorable outcome. However, we were limited to only 12 HIV-infected children available for study in Hong Kong, and statistical bias could have occurred due to heterogeneity with regard to immunologic maturity at the time of diagnosis, duration of infection with HIV-1, and extent of immunodeficiency when starting HAART. IL-2 was the only cytokine of those studied that correlated positively with increasing CD4+ T-cell percentage and absolute number and increasing CD4:CD8 ratios. Treatment with exogenous IL-2 has been shown to increase peripheral expansion of CD4+ cells. [27] IL-2 production also correlated with CD8+ T-cell increases but, surprisingly, because this population includes the major cytotoxic effector cells against HIV, it did not correlate with viral load. HIV-1 RNA copies/mL correlated inversely with Con A-induced IFN-gamma, Con A-induced IL-4, and unstimulated IL-10, suggesting that these cytokines might be involved in the control of HIV-1 levels. It is impossible to distinguish between the possibilities that high levels of virus suppressed production of these cytokines and/or that virus survived better when production of these cytokines was limited. In contrast to our findings, a previous study reported that plasma IL-10 declined during adequate virologic and immunologic responses in HAART-treated adults. [28] Differences in the race and age of the patients in the 2 studies may have contributed to these conflicting findings. Also in contrast to our study, IFN-gamma [29] and TNF-alpha [29,30] declined during adequate virologic and immunologic responses in HAART-treated adults. However, Reuben and colleagues [31] found increased plasma IFN-gamma after virus suppression in pediatric patients, and Resino and coworkers [32] found lower PHA-induced TNF-alpha and IFN-gamma in rapid-progressor children than in those who were long-term asymptomatic.
[Figure 4: (A) Con A-induced IFN-gamma-secreting cells/10⁶ PBMCs, P = .0231; (B) Con A-induced IL-4-secreting cells/10⁶ PBMCs, P = .0294; (C) unstimulated IL-10-secreting cells/10⁶ PBMCs, P = .0015.]
It is therefore possible that these cytokines may interact differently with HIV in children and adults. It should also be borne in mind that enumeration of cytokine-secreting cells following in vitro mitogen stimulation of isolated PBMCs is unlikely to compare directly with cytokine quantitation in plasma. Our novel finding that Con A-induced IL-4 was negatively correlated with viral load is in line with its ability to inhibit phorbol ester-stimulated HIV-1 expression in chronically infected promonocytic U1 cells. [33] The effect of IL-4 on HIV in culture merits further study. We did not observe changes over time suggesting that type 2 cytokine production was tending to predominate over type 1 cytokines, as was described in some [13-16] but not all [17] reports.
Instead we observed that the presumably desirable increase in numbers of both type 1 (IFN-gamma, IL-2, and IL-12) and type 2 (IL-4) ELISPOTs/10⁶ PBMCs during the first 3-4 years of treatment with HAART was not maintained beyond this time (Figure 3). It is not known whether a reducing trend of this nature presages failing immune protection or, more hopefully, a lessening of HIV-1-induced immune hyperactivation. Continued observation of this small cohort of patients should allow us to determine whether these changes in cytokine production are related to the eventual clinical outcome. The ELISPOT assay used in this investigation has been optimized for reproducibility and sensitivity. It does not require specialized equipment and is relatively easy to perform and inexpensive (approximately US$32 per patient for 7 cytokines and the different activating conditions). We have previously used this system to investigate in vitro cytokine production in a number of clinical situations. [22-26] The assay performed favorably when data from groups of patients were pooled for statistical comparison, but there was wide variation in values for different subjects and day-to-day variability due to factors such as subclinical illness, mild tissue injury, and possibly variable stress levels. It was not ethically feasible to have either a healthy matched pediatric control group or an untreated pediatric HIV control group in the present study, so we were limited to a comparison of cytokine profiles in individual patients at times of relatively good and poor health and of improving or worsening CD4+ cell counts or viral loads. We were unable to identify cytokine profiles that were associated with or predictive of HIV-related clinical events. Cytokine profiling using mitogen-stimulated ELISPOT assays is therefore unlikely to be an important clinical measure that could influence or improve the accuracy of patient management decisions. Brian M. Jones, PhD, has disclosed no relevant financial relationships. Susan S.S. Chiu, MD, has disclosed no relevant financial relationships. Wilfred H.S. Wong, MMedSci, has disclosed no relevant financial relationships. Wilina W.L. Lim, MD, has disclosed no relevant financial relationships. Yu-lung Lau, MD, has disclosed no relevant financial relationships.
[Figure 11: Numbers of IL-6-, IL-10-, and TNF-alpha-secreting cells in 12 pediatric patients treated with HAART. Cytokine-secreting cells tended to remain stable over the study period. Curves were fitted by nonlinear regression.]
Orthopoxviruses (OPV) spill over from animal reservoirs to accidental hosts, causing, in some cases, human infections. OPV, in particular Cowpox virus, are widely distributed in Europe, where an increasing number of cases have been reported in the last decade (Chantrey et al., 1999; Hoffmann et al., 2015). Cases have been described in many animals, including mongooses, jaguarundis and different types of rodents kept as pets (Campe et al., 2009; Kurth et al., 2009; Ninove et al., 2009; Vogel et al., 2012). After exposure to affected animals, OPV may cause self-limited infections in humans, characterized by pustular skin lesions. Rarely, patients may report fever, malaise, lethargy, vomiting, eye complaints and sore throat, which usually last 3-10 days. Occasionally, human infections may be severe in immunocompromised and eczematous patients, particularly children (Medscape, 2017). In Italy, few events involving OPV have previously been reported; human cases have been described after exposure to infected llamas and cats (Cardeti et al., 2011; Carletti et al., 2009; Scagliarini et al., 2016). In January 2015, an outbreak due to an OPV, probably part of a novel clade lying between Cowpox and Ectromelia viruses, occurred in a colony of Macaca tonkeana in a private nature reserve in the province of Rieti, Lazio Region (Cardeti et al., 2017). From the name of the nature reserve, this virus was called OPV Abatino. We describe the actions undertaken for the surveillance of exposed workers, which led to the identification of a human asymptomatic infection. The epidemiological investigation, performed by staff members of the "L. Spallanzani" National Institute for Infectious Diseases (INMI; VP, FMF), was conducted through a site visit, and demographic and clinical data were collected by standardized structured interviews conducted on site. Viral DNA was extracted from biological samples using QIAsymphony technology (QIAGEN, Hilden, Germany) according to the manufacturer's instructions. The detection of OPV DNA was performed using a SYBR Green I-based real-time PCR targeting the crmB gene (Carletti et al., 2005). The differential diagnosis was performed using the Respiratory Panel for the FilmArray multiplex PCR system (bioMérieux, Marcy-l'Étoile, France) according to the manufacturer's instructions. Sequential serum samples and peripheral blood mononuclear cells (PBMC) from the personnel involved in macaque farming were collected and stored at −20°C and in liquid nitrogen, respectively, until use. Antibody titres were determined by both plaque-reduction neutralization test (PRNT) and indirect immunofluorescence assay (IFA). PRNT was performed according to previously published methods (Cutchins, Warren, & Jones, 1960; Newman et al., 2003), against both the LV reference vaccinia strain and the OPV isolate that caused the outbreak (OPV Abatino). A control serum from a subject vaccinated 4 years previously was used. Slides for IFA were prepared using Vero E6 cells infected with LV and OPV Abatino; IgG and IgM were detected using standard procedures (Carletti et al., 2009). The frequency of vaccinia virus-specific T cells was analysed by ELISpot assay according to a previous report (Gioia et al., 2005). Specifically, PBMC from patients were thawed, counted with a Scepter counter (Millipore) and seeded at 3×10⁵ cells/well in RPMI-1640 medium (Sigma-Aldrich, St. Louis, MO, USA) supplemented with 10% pre-tested heat-inactivated FCS (Euroclone, Italy).
Cells were then stimulated with LV at a multiplicity of infection (MOI) of 10 for 20 hr, and immunological competence was evaluated by an IFN-γ enzyme-linked immunospot assay (ELISpot; AID Diagnostika, Germany). Leucocytes from healthy donors were used as internal positive controls. As noted above, in January 2015 an outbreak due to an OPV, probably part of a novel clade lying between Cowpox and Ectromelia viruses, occurred in the colony of Macaca tonkeana in the Rieti nature reserve; details of the diagnostic procedures and phylogenetic characterization are described elsewhere (Cardeti et al., 2017). After the identification of the outbreak among the monkeys, an epidemiological investigation targeting staff members working within the wildlife sanctuary was performed. Demographic and epidemiological data are summarized in Table 1. The staff of the nature reserve comprised 11 persons, including the owner of the reserve and his wife (ID1 and 11 in Table 1). The other persons working in the sanctuary were a veterinarian (ID2), 5 researchers (ID4, 5, 6, 8 and 9) and three persons working as maintenance and cleaning staff (ID3, 7 and 10). During work days, personnel shared common areas for briefings, as well as for eating and relaxing. All staff members underwent a structured interview about pre-existing medical conditions, level of exposure to affected monkeys, and the presence of symptoms. Four persons from the staff (the owner, his wife, and the Albanian-born maintenance and cleaning staff) had been vaccinated against smallpox. From the collected data, four persons (ID1, 2, 4 and 6) reported direct contact with the affected monkeys at the beginning of the outbreak. In particular, the veterinarian (ID2) performed an intubation on the first affected monkey with no facial or respiratory protection. The other exposed persons were in contact with sick monkeys and removed the first two dead bodies with no specific personal protective equipment (PPE). The remaining staff members did not report contact with affected monkeys. After the death of the second monkey, the personnel in contact with monkeys started to use PPE consistently, selected according to activity: for contact without direct exposure (observation, food provision, cleaning of the cage), personnel wore disposable gloves and aprons, while for direct contact (such as the removal of dead bodies), the following disposable PPE was worn: complete Tyvek suit with FFP2 mask, face shield, gloves and apron. In all cases, reusable boots were disinfected after use with a sodium hypochlorite solution. Moreover, the cage was provided with dedicated equipment; the researchers already exposed (mainly ID4 and 6) were exclusively dedicated to the affected colony, with no contact with other animals; they also took charge of cleaning and maintenance activities. Two staff members reported symptoms (Table 1). One was admitted to hospital; in consideration of his good clinical condition, the patient was discharged after 2 days. The researcher (ID4) directly exposed to affected monkeys also developed mild fever and sore throat. Symptoms appeared 12 days after the last unprotected exposure. She refused hospital admission, and the symptoms disappeared spontaneously within a few days. All personnel underwent serological surveillance, with serum specimens taken on January 22 and after 6 weeks; IgG and IgM against OPV were investigated.
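As context for the titres reported next: a PRNT titre is conventionally read as the reciprocal of the highest serum dilution that reduces viral plaque counts by at least a set percentage (often 50%, PRNT50) relative to a virus-only control. The sketch below illustrates that readout; the dilution series and plaque counts are hypothetical, and the source does not specify the cutoff used in this study.

```python
def prnt_titre(plaques_by_dilution, control_plaques, cutoff=0.50):
    """Reciprocal of the highest dilution giving >= cutoff plaque reduction.

    plaques_by_dilution maps reciprocal serum dilutions (e.g. 20, 40, ...)
    to mean plaque counts; control_plaques is the virus-only control.
    Returns None if no dilution reaches the cutoff.
    """
    titre = None
    for dilution in sorted(plaques_by_dilution):
        reduction = 1.0 - plaques_by_dilution[dilution] / control_plaques
        if reduction >= cutoff:
            titre = dilution  # still neutralizing at this dilution
    return titre

# Hypothetical series: neutralization holds through 1:160, so PRNT50 = 160.
counts = {20: 2, 40: 5, 80: 12, 160: 20, 320: 38}
print(prnt_titre(counts, control_plaques=50))
```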
Persons vaccinated for smallpox showed comparable and stable IgG IFA titres against both LV and OPV Abatino over a 6-week interval, and no IgM response. All but one of the unvaccinated persons showed no serological response to either LV or OPV Abatino; one unvaccinated researcher directly exposed to diseased monkeys (ID6) showed a serological response to both LV and OPV Abatino over a 13-week observation period, with IgM seroconversion and a progressive increase in IgG, suggesting recent exposure to OPV. The response against OPV Abatino was consistently higher (2-fold) than that against LV, indicating OPV Abatino as the actual trigger of the immune response (Table 1). However, as observed by Silva-Fernandes et al. (2009) in describing outbreaks, Orthopoxvirus infections seem to induce a limited IgM response in exposed subjects. In addition, the low IgM titres detected could be due to the poor sensitivity of the IFA test adopted. PRNT against both LV and OPV Abatino was performed to confirm the specificity of the humoral immune response of ID6. As shown in Figure 1, neutralizing antibodies against both LV and OPV Abatino developed from week 2 onward, peaking at week 6. However, the titre of neutralizing antibodies against the outbreak isolate was 10-fold higher than that against LV, consistent with the IFA antibody results.
[Figure 1: Humoral immune response. Serum samples collected at different time points (weeks 2, 6 and 13) were tested using the plaque-reduction neutralization test (PRNT). Each serum sample was tested against either OPV Abatino (circles) or the LV reference vaccinia strain (triangles).]
Moreover, to evaluate the poxvirus-specific T-cell response, PBMC were stimulated with vaccinia virus as previously described (Gioia et al., 2005) and analysed by ELISpot assay. A marked increase in the specific T-cell response, from 25 spot-forming cells (SFC)/10⁶ PBMC to 89 SFC/10⁶ PBMC at week 6, was observed in the same researcher (ID6) who showed a serological response, supporting the hypothesis of a recent asymptomatic infection with OPV. All other staff members showed no sign of a specific T-cell response, consistent with their lack of serological response. OPV spillover is increasing and, although it is described as an extremely rare zoonosis in humans, many reports among animals, with or without the involvement of humans, have recently been published (Campe et al., 2009; Kurth et al., 2009; Ninove et al., 2009; Vogel et al., 2012). Monkeys of the genus Macaca have previously been involved in a cowpox outbreak in the Netherlands (Martina et al., 2006). Data on lethality among Macaca were not reported in that study, so it is not possible to compare the two outbreaks. In the Italian outbreak among Macaca tonkeana, the clinical attack rate among animals was 89%, and lethality was 67% (Cardeti et al., 2017). Reports of human cases have also increased in recent years. The close contact among wild, captive and domestic animals and humans, together with decreased immunity against OPV in the community, underscores the need for physicians, veterinarians and animal handlers to become aware of the risk of OPV zoonoses. This work was supported by the Ministero della Salute, Italia-Ricerca Corrente, Istituti di Ricovero e Cura a Carattere Scientifico and by the European-funded Joint Action (JA) CHAFEA No 677066 (EMERGE).
No formal ethics approval was required in this particular case as it is a descriptive study; all procedures described followed the normal good standard of care, and no experimental or innovative treatments or approaches were used; patients have been anonymized; patients gave their written consent for the publication of their clinical histories. Written informed consent is available on request. All authors declare that no competing interests exist.
Peripheral neuropathy (PN) has not previously been linked to severe COVID-19. We report a previously healthy middle-aged man who had life-threatening COVID-19 characterized by PN, acute respiratory distress syndrome, sepsis, and hyperinflammation, all resolved after plasma exchange. Plasma exchange is a safe adjunctive therapy in severe COVID-19 with neurological manifestations. Brain and peripheral nervous system pathologies have been reported in the novel SARS-CoV-2 disease (COVID-19). [1-5] However, severe COVID-19 had not previously been linked to peripheral neuropathy (PN). Neurological manifestations in COVID-19 may be partially attributed to the biochemical perturbations of sepsis, neuroinflammation, and cytokine release syndrome (CRS). 6 Severe COVID-19 is characterized by acute respiratory distress syndrome (ARDS), sepsis, thromboembolic disease, and CRS. [7-9] We briefly outline a patient with severe COVID-19 who presented with PN, ARDS, and CRS, all resolved after the administration of therapeutic plasma exchange (TPE) with artificial plasma. A 44-year-old previously healthy man was admitted to the emergency department (ED) in June 2020 with 12 days of fever (38.3°C), persistent cough, anosmia, diarrhea, myalgias, and progressive bilateral lower limb weakness. The patient mentioned unprotected contact with his brother, who was infected with SARS-CoV-2, 1 week prior to the development of his symptoms. Neurological examination showed reduced power (3-out-of-5) in bilateral lower limb muscle groups, plus gait ataxia and areflexia in bilateral knees and ankles. He had no sensory deficits and no central nervous system or other neurological signs and symptoms. Emergency electromyography (EMG) of the lower limbs was performed, which revealed delayed latencies but normal conduction velocities (Table 1). Unfortunately, the upper limbs were not tested and no biopsy was performed upon ED admission, as the patient was fast-tracked to investigate his COVID-19 status. However, these EMG findings were considered to be quasi-normal, and our differential diagnosis therefore included peripheral neuropathy (demyelinating versus axonal). [10-13] Physical examination depicted bilateral crackles on the lung bases. The saturation of peripheral oxygen (SpO2) was 78% on room air, and the rest of the vital signs were within normal limits. The patient was connected to a high-flow nasal cannula (flow: 60 L/min, fraction of inspired oxygen [FiO2]: 40%), maintaining an SpO2 of 89%. Infection with SARS-CoV-2 was suspected due to the epidemiologic background and the clinical presentation. Baseline laboratory results showed lymphocytopenia (0.51 × 10⁹/L, normal: 1.1-3.2 × 10⁹/L) and increased inflammatory biomarkers defining CRS, such as C-reactive protein (247 mg/L, normal: 0-5 mg/L), lactate dehydrogenase (1222 U/L, normal: 100-190 U/L), D-dimers (3.6 µg/mL, normal: 0-0.5 µg/mL), ferritin (1123 ng/mL, normal: 23-336 ng/mL), and interleukin-6 (778 pg/mL, normal: 1-7 pg/mL). 9 Creatine kinase was slightly increased (616 U/L, normal: 22-198 U/L), but renal function and the rest of the biochemistry report were within normal limits. The toxicology screen was negative. The coagulation profile was normal apart from increased D-dimers. However, low ADAMTS 13 activity with antibody titers within normal limits (TECHNOZYM® ELISA) was detected (ADAMTS 13 activity: 8%, normal: >10%; ADAMTS 13 IgG: 9 U/L, normal: 6-12 U/L). 14
These tests were performed as the thrombotic risk was considered to be high (D-dimers > 3 µg/mL and possible COVID-19 status). [15-20] Moreover, a full work-up for other systemic disorders (ie, autoimmune diseases and antiphospholipid antibodies) was performed accordingly. Although there were no central nervous system signs or symptoms, a lumbar puncture was performed by a consultant neurologist, and subsequent cerebrospinal fluid analysis revealed a normal cell count (2 × 10⁶/L, normal: 0-8 × 10⁶/L) and protein (12 mg/dL, normal: 8-43 mg/dL). SARS-CoV-2 infection was confirmed by RT-PCR assays (targeting the RdRp, E, and N genes of SARS-CoV-2), which were performed on nasopharyngeal swabs using the QuantiNova Probe RT-PCR kit (Qiagen) in a LightCycler 480 real-time PCR system (Roche), as previously described. 21,22 Contrast chest computed tomography scans depicted peripheral bilateral ground-glass opacities and excluded pulmonary embolism (Figure 1). Fourteen hours post-ED admission, the patient developed severe hypoxia (SpO2/FiO2 ratio: 100) and septic shock; hence, he was intubated and transferred to the intensive care unit (ICU). We administered ARDS-net and prone positioning ventilation, and empiric therapy with lopinavir/ritonavir, ribavirin, interferon beta-1b, broad-spectrum antibiotics, intravenous vasopressors and hydrocortisone, prophylactic anticoagulation, and supportive ICU care as per hospital protocol. 23 Echocardiography and cardiac enzymes were normal, while lower limb sonography excluded deep vein thrombosis. However, the patient's clinical status was deteriorating; hence, we applied rescue TPE as detailed elsewhere. 9 Briefly, TPE was initiated using the Spectra Optia™ Apheresis System, which operates with acid-citrate dextrose anticoagulant, as per Kidney Disease Improving Global Outcomes 2019 guidelines. 24 A dose of 1.5 plasma volumes was used for the first session, then one plasma volume daily. Plasma was replaced with artificial Octaplas LG® (Octapharma AG), which is a fresh frozen pooled plasma product that underwent viral inactivation by prion reduction technology to minimize the possible risk of transmission of infectious agents. 25 Three daily (4-hour) sessions of TPE were applied without any complications (ie, allergy, coagulopathy, infections, cardiac, and renal system side effects) recorded. After the completion of the TPE sessions, the SpO2/FiO2 ratio exceeded 350 with gradual radiological improvement, and we were able to wean off vasopressors. Moreover, the patient's inflammatory biomarkers (C-reactive protein, lactate dehydrogenase, ferritin, D-dimers, and interleukin-6 levels), along with the ADAMTS 13 activity/antibody levels, normalized with a parallel sustained increase of lymphocyte counts (Table 2). He was extubated on day 7 post-ICU admission. RT-PCR test and microbiology were negative on day 18 post-ICU admission. All the work-up for other systemic diseases (ie, autoimmune disorders) and viral infections was negative. He was discharged to the neurology ward on day 20 post-ICU admission. On day 30, follow-up magnetic resonance imaging of the head and spine, and motor nerve conduction studies of the upper and lower limbs, were normal, as were lower limb power and gait. Thereafter, the patient was discharged to home isolation. This case report, despite its many limitations, is shared because the neurological manifestations in severe COVID-19 are yet to be fully understood.
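For readers unfamiliar with apheresis dosing, the exchange volumes above scale with the patient's estimated plasma volume. The report does not state how plasma volume was estimated, so the sketch below uses a common bedside approximation (roughly 0.07 L/kg of body weight scaled by one minus the hematocrit); the weight and hematocrit values are hypothetical, purely for illustration.

```python
# Illustrative sketch only: the report does not say how the patient's plasma
# volume was estimated. A common bedside approximation in apheresis is
#   EPV (L) ~= 0.07 * weight_kg * (1 - hematocrit),
# used here with hypothetical weight and hematocrit values.

def estimated_plasma_volume(weight_kg: float, hematocrit: float) -> float:
    """Approximate plasma volume in litres (0.07 L/kg rule)."""
    return 0.07 * weight_kg * (1.0 - hematocrit)

def tpe_session_volumes(epv_litres: float, n_sessions: int = 3) -> list:
    """1.5 plasma volumes for the first session, then 1.0 plasma volume
    daily, matching the dosing described in this report."""
    return [1.5 * epv_litres] + [1.0 * epv_litres] * (n_sessions - 1)

epv = estimated_plasma_volume(weight_kg=80.0, hematocrit=0.42)  # hypothetical
print(f"estimated plasma volume: {epv:.2f} L")
print("exchange volume per session (L):",
      [round(v, 2) for v in tpe_session_volumes(epv)])
```

For an 80 kg patient with a hematocrit of 0.42, this gives roughly 3.2 L of plasma, so about 4.9 L exchanged in the first session and 3.2 L in each subsequent session.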
We cannot definitively attribute the putative PN to COVID-19, as a muscle biopsy and upper-limb EMG were not performed upon ED admission. However, the lower limbs' EMG findings, along with the neurological clinical picture, were suggestive of PN. [10-13] In a recent large retrospective study, PN was observed in less than 1% of COVID-19 patients. 26 We cannot exclude Guillain-Barré syndrome (GBS) from our differential diagnosis. GBS is an acute immune-mediated disease of peripheral nerves and nerve roots (polyradiculoneuropathy), which can be triggered by various infections. 27,28 GBS was recently described in COVID-19 patients. 2,29 The typical features of GBS comprise progressive, ascending, symmetrical flaccid limb paralysis, areflexia or hyporeflexia, with or without cranial nerve involvement, which can progress over the course of days to several weeks. Although our patient did not have typical manifestations, the possibility of a GBS variant cannot be excluded. Antiganglioside antibodies, which are strongly associated with certain forms of GBS, were not tested. 27,28 COVID-19 appears to have neuroinvasive properties; however, the pathophysiology of immune-mediated peripheral neuropathy remains obscure. Molecular mimicry, which is an important mechanism in creating autoimmune disorders, may have a role in the development of COVID-19-associated GBS. 27,28 COVID-19-related hyperinflammation and immune system dysregulation, which in turn could generate autoimmune processes, could be another potential mechanism. [6-9] In this report, whether the application of TPE accounted for the neurological improvement of the patient is unclear; however, given the severity of our patient's clinical picture, we speculate that it might have helped, especially given its biochemical plausibility. TPE can remove interleukins 3, 6, 8 and 10, interferon-gamma, tumor necrosis factor-alpha, and various immunoglobulins of the IgG class. 9,30,31 Moreover, the main rationale for applying TPE in COVID-19 is the suppression of thromboinflammation and the amelioration of the ensuing microangiopathy. [15-20] In our patient, we observed normalization of inflammatory biomarkers and a sustained increase in lymphocyte counts and ADAMTS 13 activity after three sessions of TPE. Elevated levels of inflammatory biomarkers and lymphocytopenia are predictors of severe COVID-19 and death, while low levels of ADAMTS 13 have been correlated with poor prognosis in patients with sepsis and multi-organ failure. [7-9],32 Notably, extracorporeal blood purification therapies were previously used in severe sepsis and in COVID-19 with associated CRS. 9,30,31 Also, no specific antiviral therapies for COVID-19 have been approved for widespread use thus far. We used TPE with artificial plasma replacement for the first time, to our knowledge, to treat severe COVID-19 with neurological manifestations and associated CRS. We also documented a thromboinflammation profile in severe COVID-19 that exhibited similarities to sepsis and other systemic thrombotic microangiopathies, although no severe coagulopathy was evident in our patient. [33-35] The hyperinflammation response and microthrombosis in COVID-19 result in multi-system organ failure with fatal outcomes. [15-20]
SARS-CoV-2 has a versatile organotropism for extra-pulmonary targets; moreover, it can bind to the ACE-2 receptor, facilitating endothelial injury and thromboinflammation against a background of dysregulated renin-angiotensin-aldosterone and immune system responses. 36 This case report has limitations, which prevent its generalizability. Apart from TPE, the patient received empiric therapies and ICU supportive care. We are uncertain of their effects on inflammatory mediators or the extent to which these therapies affected survival or improved the neurological manifestations. The effect of TPE upon SARS-CoV-2 shedding is also unclear. Moreover, the natural course of SARS-CoV-2 viremia is still undetermined, as reinfections and/or recurrently positive RT-PCR results have been described. [37-39] Hence, the optimal TPE regimen remains to be further clarified in future studies. Nevertheless, we are encouraged that prompt initiation of TPE was associated with resolution of CRS and amelioration of the clinical picture in our patient with severe COVID-19. 9,40 Previous studies advocated that, at this stage of the disease, mitigating the dysregulated immune and inflammatory response could be more important than targeting the virus per se. 41 Also, unlike several immunomodulatory therapies, there is minimal immunosuppression associated with plasma exchange. However, the cost and resources of TPE should be carefully evaluated when it is employed in the management of COVID-19. Hence, we suggest its use only as an adjunctive rescue therapy in patients with life-threatening disease. 9 The correlation between severe brain pathology and COVID-19 was previously established. However, the putative link of peripheral neuropathies, including Guillain-Barré syndrome, to life-threatening COVID-19 needs to be further elucidated. Plasma exchange is a safe rescue therapy in life-threatening COVID-19 with associated neurological manifestations and thromboinflammation.
Background: Contact patterns are the drivers of close-contact infections, such as COVID-19. In an effort to control COVID-19 transmission in the UK, schools were closed on 23 March 2020. With social distancing in place, Primary Schools were partially re-opened on 1 June 2020, with plans to fully re-open in September 2020. The impact of social distancing and risk mitigation measures on children's contact patterns is not known. We conducted a structured expert elicitation of a sample of Primary Headteachers to quantify contact patterns within schools in pre-COVID-19 times and how these patterns were expected to change upon re-opening. Point estimates with uncertainty were determined by a formal performance-based algorithm. Additionally, we surveyed school Headteachers about risk mitigation strategies and their anticipated effectiveness. Expert elicitation provides estimates of contact patterns that are consistent with contact surveys. We report mean numbers of contacts per day for four cohorts within schools, along with a range at 90% confidence for the variations of contacts among individuals. Prior to lockdown, we estimate that, in mean numbers per day, younger children (…) other staff. Contacts between teaching and non-teaching staff reduced by 80%, which is consistent with other independent estimates. The distributions of contacts per person are asymmetric, indicating a heavy tail of individuals with high contact numbers. Conclusions: We interpret the reduction in children's contacts as a consequence of efforts to reduce mixing, with interventions such as forming groups of children (bubbles) who are organized to learn together to limit contacts. Distributions of contacts for children and adults can be used to inform COVID-19 transmission modelling. Our findings suggest that while official DfE guidelines form the basis for risk mitigation in schools, individual schools have adopted their own bespoke strategies, often going beyond the guidelines. […] of Primary school children in England before the summer holidays. There is now an expectation that schools will fully re-open in September. The partial re-opening of Primary schools has been widely debated, with concerns from some parents, teaching unions and teacher associations about the safety of the children and school staff. There was also concern about the effect on the infection rate in the wider community, for example by triggering a second wave. Some schools re-opened on 1st June, some delayed their re-start while, in other cases, schools did not re-open under advice of local education authorities. Data from DfE indicate that on 15th July about 88% of state-funded Primary Schools had re-opened to some extent. The community response has been variable: between 1st and 15th June 2020 approximately one third of eligible children returned. The numbers have increased somewhat and were 41% (year 1) and 49% (year 6) on 2nd July. Between 18th May and 31st July 2020 there were 247 COVID-19 related incidents in schools, of which 116 tested positive (PHE 2020). […] Primary Schools as experts. SEJ is a well-established approach to quantifying parameters and their attendant uncertainties where there are no data or the data are sparse, of poor quality, highly empirical in character, or have large associated uncertainties (Cooke 1991; Colson & Cooke 2017).
This is the case for many epidemiological parameters, including contact patterns, which are used in combination with the probability of transmission and the infectious period to describe the reproduction number in a population. SEJ has been widely applied to risk assessment and uncertainty analysis in many areas of science, engineering, the environment, business and public health (Colson & Cooke 2017). […] were made as a consequence of these reviews. Normally, experts in an SEJ are convened in one or more face-to-face meetings. A meeting usually covers: introduction to the methodology; calibration of the experts using seed questions; presentation and discussion of the questions; and time for the experts to answer the questions. A meeting might last 1 or 2 days, with plenty of time for discussion of the evidence that informs responses to the questions. After the data have been processed there is then an opportunity to discuss the results, and there may be opportunities to re-elicit some questions. During national lockdown, and with experts distributed in Primary Schools across England, the normal procedure was not possible. A briefing session was held by Zoom and most of the experts participated. The briefing session was recorded, enabling those who could not attend the chance to hear the proceedings. For both the seed and elicitation questions, Aspinall and Sparks were available to respond to queries and clarifications by email. To compensate for the lack of a meeting, six experts were chosen for structured interviews (Longhurst, 2015). The question protocol ensured participants described the thinking behind their quantitative answers but also allowed a free exploration of topics the headteachers perceived as relevant to adult and child contacts. The experts: The elicitation produced 26 complete responses from experts who had undergone the Classical Model (CM) calibration process. Thus, each had a personal statistical accuracy score and an information score (Table 1). These scores are the basis for ascribing a relative performance weight to each participant for pooling judgements in the context of enumerating uncertainty assessments for specific target/query items. Classical Model non-optimised item-weights combination solutions (i.e. all teachers given some weight based on calibration scores) are listed in Table 1. These weights are not the same as ascribing equal weights to all participants. When combining judgements collectively, each person's uncertainty distribution is weighted according to their own performance score, so each contributes with some real positive weight to the overall outcome. The calibration "p-value" score is calculated on the basis of a Chi-squared test on the deviations of the expert's judgements over the seed questions, relative to the known values used for calibration. Using Shannon's relative information statistic, the Chi-squared test takes the frequencies with which the expert's assessed calibration variable values fall within various ranges, and compares these with the counts of actual (known) item values in the same ranges; this produces relative probabilities of match per item, which can be summed over all calibration items to form a measure of the expert's statistical accuracy.
When combined with the expert's information metric (see Cooke, 1991) in a product, this p-value provides the 'statistical accuracy' part of the expert's performance score. In effect, a calibration p-value can be thought of as equivalent to the probability that an expert's performance would be regarded as statistically accurate by chance but, in the Classical Model, it is not used in the sense of a hypothesis test. Rather, it is the formal mathematical basis for enumerating the metric for the statistical accuracy of the expert's assessments and is used, with a companion information metric, for scoring performance in assessing uncertainties. A high p-value indicates a close correspondence between the expert's assessment values and the known values (a perfect score would be p = 1); a very low p-value signals that major deviations exist between the assessed and actual values. The calibration metric is a 'fast' function and typically changes markedly between experts in an elicitation. To provide some context for the teachers' panel revealed performance scores, the calibration and weighting profile of the group is very similar to that of other professional expert elicitations; for example, Figure 1 shows a profile for a medical panel with the same number of participants (n.b. some points overlap). While the overall range of the teachers' statistical accuracy p-values, from highest to lowest, is not atypical, in this case there are fewer individuals with very low p-values (i.e. p < 10⁻⁶) than in the medical panel; the latter represents the more usual case. With the exception of one teacher (a very low p-value outlier not shown here), the majority of the teachers' p-values are more clustered at higher p-values than those obtained for the medical panel. On the other hand, the relative information scores of the teachers are generally lower than those of the medical experts; this may reflect inherent differences in the precision of the data types each group was considering. Our conclusion is that the teachers' elicitation does not exhibit any substantive shortcomings compared to other cases, notwithstanding that it was conducted with minimal briefing and without the benefit of a plenary workshop to help focus judgements. In short, the Headteachers' performances proved to be as strong, collectively, as those of many other groups of experts (see Colson and Cooke 2017 for other case profiles). The combination Decision Maker (DM), the pooled judgement of all experts, is statistically more accurate than any individual expert. The range of relative information metric scores is also typical; this is a 'slow' function that does not vary greatly from one expert to another. Higher values indicate experts who provided tighter (more informative) uncertainty ranges. However, there is here, and usually, an inverse relationship between informativeness and statistical accuracy: experts who are too narrow or too precise with their uncertainty judgements tend to 'miss the target' too frequently and their performance scores are penalised as a consequence. On the other hand, the goal is to identify those experts who are both informative and statistically accurate.
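To make the scoring concrete, the sketch below shows one minimal reading of the Classical Model calibration score described above: an expert's 5th/50th/95th quantiles on each seed question partition the line into four bins with theoretical probabilities 5%, 45%, 45% and 5%, and the statistical accuracy is the chi-squared tail probability of the likelihood-ratio statistic 2N·I(s‖p). The study itself used dedicated SEJ software; the quantiles and realizations here are hypothetical, and the companion information score (which needs a background measure) is omitted.

```python
# Minimal sketch of the Classical Model calibration score (Cooke 1991),
# assuming each expert gave 5th/50th/95th quantiles on seed questions with
# known answers. All numbers are hypothetical.
import math
from scipy.stats import chi2

# Theoretical probabilities that a realization falls below the 5th quantile,
# between the 5th and 50th, the 50th and 95th, or above the 95th.
P_BINS = [0.05, 0.45, 0.45, 0.05]

def bin_counts(quantiles, realizations):
    """Count the inter-quantile bin into which each known seed value fell."""
    counts = [0, 0, 0, 0]
    for (q5, q50, q95), x in zip(quantiles, realizations):
        if x < q5:
            counts[0] += 1
        elif x < q50:
            counts[1] += 1
        elif x < q95:
            counts[2] += 1
        else:
            counts[3] += 1
    return counts

def calibration_score(counts):
    """Statistical accuracy: chi-squared tail probability of the
    likelihood-ratio statistic 2N * I(empirical || theoretical)."""
    n = sum(counts)
    info = sum((c / n) * math.log((c / n) / p)
               for c, p in zip(counts, P_BINS) if c > 0)
    return chi2.sf(2 * n * info, df=len(P_BINS) - 1)

# Hypothetical expert: ten seed items, each assessed with the same 90%
# credible interval (2, 5, 9); realizations are the known answers.
quantiles = [(2.0, 5.0, 9.0)] * 10
realizations = [1.0, 3.0, 4.0, 6.0, 7.0, 4.0, 8.0, 6.0, 5.5, 10.0]
counts = bin_counts(quantiles, realizations)
print(counts, f"-> calibration p-value: {calibration_score(counts):.3f}")
```

A well-calibrated assessor has realizations spread across the bins roughly in proportion to P_BINS, giving a high p-value; an overconfident assessor piles realizations into the outer bins and is penalised.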
This is usually a minority of a group, and such persons are not identifiable a priori on common grounds such as professional standing; the best uncertainty assessors are discovered only after calibration. The DM has a low information score compared to individual experts, and this is the price paid for the DM's superior statistical accuracy. As noted above, it is well established that individual experts tend to under-estimate true variable uncertainties (Lin & Bier 2008), and pooling via the DM redresses this trait. The combination DM out-performs the best single expert and provides a set of target item solutions that is superior to any single expert. Individual experts' relative performance scores (column 5 in Table 1) are computed from the products of their p-value and information scores per item, and are un-normalised. When the DM is included as a synthetic expert, performance scores are adjusted and normalised to sum to unity across the group (with the DM included). These are the relative weights (column 6 in Table 1) that are used in the Classical Model for combining judgements on target items. Two experts (E09 & E10) jointly achieve the best statistical accuracy scores (0.182); their judgement influences are differentiated in the analysis by their different information scores: one is more informative than the other and is consequently rewarded with a marginally higher overall weight (see the two rightmost black points in Figure 1). Prior to the main elicitation, and before 1st June, we asked the experts to forecast the proportion of returning pupils and teachers on two dates (1st and 15th June). When completing these forecasts, some teachers indicated that their estimates and ranges were based on surveys of parents conducted before 1 June for planning purposes. As such, these percentages did not represent personal judgements, sensu stricto. However, absent comments from other teachers, it was not possible to know how general pre-return surveys were, or how many respondents had provided percentage data based on similar surveys. Thus, for the purposes and goal of our elicitation exercise, we chose to regard all the inputs on the percentages of pupils returning as representing objective, informed judgements and treated them uniformly when processing the entire group's responses. The forecasts were received on or before 27th May 2020. There was a wide range of responses (Figure 2 and Table 2), indicating a remarkably diverse set of circumstances in individual schools and community enthusiasm, or lack of enthusiasm, for a return to school. This was borne out by the expert interviews, where the actual return had been highly variable across just six schools, driven both by community perception and by schools' mitigation measures. However, when averaged over all of the teachers, the median forecasts are very close (indeed, for pupils' attendance, identical) to the national attendance on 1st June (Table 2). The national attendance on 15th June was similar to 1st June but had increased by 2nd July to levels similar to those anticipated by the teachers for 15th June. These results demonstrate the ability of the teachers, as experts, to make good forecasts and strengthen the belief that the study schools are a dependable representative sample of primary schools in England.
The six structured interviews strengthened the assessment of the teachers as thoughtful and careful in thinking through the responses to the questions, even though the questions were regarded as challenging. All thought about variations between different children, differences in the school day, differences between class time and breaks, and differences in the roles and interactions of staff. Two of the interviewees had consulted with other staff to help them think about contacts. Here we present and interpret the quantitative responses from the elicitation. Table 3 itemises the questions for which quantitative answers were requested. The questions largely focus on contacts between persons in schools, as epidemiological models use contact data as a basis for modelling transmission of infection (Danon et al. 2013). Essentially, the greater the number of contacts and the longer the duration of those contacts, the greater the chance of infection transmission. Table 4 lists the results, noting that triplets of quantiles for the elicitation of a single central value are variance spreads on the single values, rather than the usual elicited uncertainty ranges per expert. In several questions the experts are asked a pair of questions: to estimate contact numbers for an individual (Q2a, Q3a, Q5a, Q6a) and then to estimate the range of contacts considering the variations between individuals (Q2b, Q3b, Q5b, Q6b). Figure 3 shows examples of responses to some questions to illustrate the variation in responses among the experts. Figure 4 shows the range graphs for the DM contact distributions, while Figure 5 shows a typical distribution for an individual child. The amalgamated DM distributions are characteristically heavy-tailed. Figure 6 plots the 5th, mean and 95th values to compare normal (pre-COVID) and new normal (COVID) times. The interviews conducted suggested the teachers were largely thinking about how contacts might vary from day to day, with this being much harder to estimate during normal times due to the diversity of activities pursued by adults and children. For the second set of questions the variation between experts was also dependent on the make-up of the school; for example, some schools had special-learning units, which meant some children in the cohort spent only part of the day engaged in mainstream learning. Before presenting the responses to specific questions we consider sources of uncertainty in the results, with a focus on the contact data. Some questions concern the contacts of individuals while others concern variations among groups of individuals. Each teacher has effectively been asked to make measurements of contacts. The definition of a contact (conversation at 1 metre for five minutes or more) is challenging, and quite large uncertainty is implicit in making the measurement. The large range for most experts in the graphs for individuals (Figure 3) represents the accuracy of the measurement. In the interviews teachers also placed different emphasis on thinking this through; in some instances 5 minutes of contact was considered a long time for some children during the course of play-led learning. In comparing schools and combining experts into a DM, two additional sources of uncertainty arise. First, there are likely to be real differences between schools because of variations in bubble sizes and school characteristics.
In all of the interviews the teachers attested to the strict way in which the bubbles were observed, and this influenced their thinking about contacts. Second, there is a calibration issue, with each expert working out how to make the measurement. These uncertainties are manifest in the wide variation of elicited values observed in Figure 3. In normal circumstances, face-to-face discussions between experts might have been able to reduce the calibration effect. In creating the DM we are combining measurement accuracy at schools, real variations in contacts between schools and significantly different calibrations. The DM distribution aggregates all these uncertainties and results in markedly skewed distributions (Figure 4a). Here we take the ratio of the right to the left tail, commonly termed eccentricity, as an approximate measure of skewness. […] are organized to learn together to limit contacts with many children. The DfE guidelines issued during the study period were for schools to form bubbles of up to 15 children. The responses indicated bubbles of between 6 and 15 children, with a mean of 11 (Cohort 1) and 13 (Cohort 2). The responses to the question on spacing between bubbles indicated that each bubble was in a separate room, so these results are not reported as they are deemed not relevant to the risk of transmission between children in different bubbles during class time. Teacher interviews substantiated this view, with teacher-teacher contacts between bubbles being substantially restricted. The interviewees cited this change of behaviour as one of the most significant in terms of reduction in contacts for teachers. We first consider results for contacts of children in normal pre-COVID times (Q2, Q5). […] of high-contact individuals. These studies do not report 5th and 95th percentiles of the contact distributions. The teacher-elicited data for pre-COVID times (Q2c and Q5c) are qualitatively consistent with previous studies. The contacts between children in new normal times under risk mitigation regimes are substantially reduced (Figure 6): mean contact numbers are reduced by 53% for Cohort 1 (Q3a) and 62% for Cohort 2 (Q6a). The latter reduction is slightly less than the reduction of 74% for adults found by Jarvis et al. (2020). The somewhat greater reduction in contacts in Cohort 2 compared to Cohort 1 in new normal times, however, indicates that older children are slightly easier to manage with respect to social distancing. This interpretation was supported by interviewees. Some thought that older children were more capable of more sustained (> 5m) social distancing (talking face-to-face…). […] The formation of bubbles, however, is unlikely to be the only factor in the decrease in contacts. Children will naturally form smaller groups in schools due either to juxtaposition in class or to the formation of friendship groups that are much smaller than a classroom (Conlan et al. 2011). Thus most contacts will be restricted to a fraction of a class in normal times, while in bubbles the size of friendship groups will become more comparable. The interviews indicated, however, that reception classes in particular were more free-flow, with gregarious children interacting with sibling groups in other years as well as their peer group within bubbles. These observations help explain the significantly smaller reduction in contacts in Cohort 1.
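As a small illustration of the tail-ratio measure mentioned above, the following sketch assumes eccentricity is computed as (95th − 50th)/(50th − 5th) from a quantile triplet; the exact convention used in the paper is not spelled out, and the example quantiles are hypothetical.

```python
# Sketch of the tail-ratio ("eccentricity") skewness measure, assuming it is
# the ratio of the right tail to the left tail of a quantile triplet. The
# example values are hypothetical, chosen to show the heavy right tail
# typical of the elicited contact distributions.

def eccentricity(q5: float, q50: float, q95: float) -> float:
    """(q95 - q50) / (q50 - q5): values above 1 indicate right skew."""
    return (q95 - q50) / (q50 - q5)

# Hypothetical DM quantiles for daily contacts of one child cohort.
print(f"eccentricity = {eccentricity(q5=4, q50=12, q95=40):.1f}")  # -> 3.5
```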
We asked questions about differences between the cohorts (Q4 and Q7). […] In new normal times, mean daily contacts for Cohort 3 (Q9c) decrease from 25 to 10, a 60% reduction (Figure 6). The reduction in contacts compared to normal times is a factor of 2.5, compared to between 2.1 (Cohort 1) and 1.5 (Cohort 2) for children. Note that the mean of 10 is similar to the bubble size of 11-13 (Q1b). The contacts between adults reduce by 80% between normal (Q13) and new normal (Q15) times (Figure 6). This result is comparable to the 74% reduction in adults found by Jarvis et al. (2020). These results indicate that teachers are social distancing to a greater extent than children, limiting close contacts as far as is feasible. Answers to Q9b indicate variations in the roles of different teachers and the effects of tasks like supervising breaks and meal times. There is still a heavy tail to the responses, but it is not as great as for other responses. In the interviews the teachers perceived that those facing older children were strongly socially distanced, but those facing younger children were less able to strictly observe this. For Cohort 4 the change between normal (Q10) and new normal times is similar to Cohort 3, but in general contacts are fewer (by ~30%) and reflect the different roles of ancillary staff, some of which involve much less interaction with children (e.g. administrative staff). Responses to Q11 indicate a 64% reduction of contacts in new normal times (Figure 6), reflecting the deliberate policy of limiting ancillary staff contacts with children and the efforts of these staff to observe social distancing. Adult-to-adult contacts are reduced by 80% between normal and new normal times (Figure 6). At interview, Cohort 4 were uniformly perceived as the adults with the greatest change to their day-to-day contacts in the working day, with some able to be entirely socially distant. This is similar to the overall reduction in contacts of 20% in the general community. […] We asked about adherence, and the responses indicate that the experts considered adherence to be high (Q16). This view is supported by the analysis of the qualitative parts of the questionnaire related to risk mitigation measures. From the interviews, the strict adherence was often driven by thinking about consequences, for children, their families, and also their colleagues. Questions related to understanding risk reduction (Q20, Q21, Q22) are discussed in the next section. We asked the Headteachers about the risk mitigation measures that they have put in place. Questions are listed in Table 4. Reference is made for some topics to the relevant literature. In addition to the survey question answers, structured interviews were conducted to help in the interpretation of the responses. Teachers have attempted to follow the Government guidelines with respect to school reopening. Links to guidelines from DfE can be found as footnotes. These guidelines have been updated regularly, so it is not easy to state categorically which guidelines were followed when teachers completed their elicitations. It is apparent that in instances where the guidelines are not precisely defined, such as 'minimising contact and mixing by altering, as much as possible…', teachers have typically been very cautious, putting in place measures that go beyond the guidelines.
Equally, there are no explicit guidelines related to the need to 'wash hands frequently', and consequently there is considerable variation in interpretation. One interviewed teacher made the point that the younger children are in the process of learning, and that learning needs to incorporate the making of mistakes. Many schools had done a good job of initiating procedures for handwashing that were age-appropriate and encouraged sticking within the guidelines (e.g. no singing). Teachers have interpreted the guidelines to suit their own settings and then gone one step further to ensure the safety of their charges. Further comparisons to the guidelines are included in each section below. The appendix reports all responses and provides a record of good practice and innovation by the schools, which will be shared between the schools. Answers to Q1a, concerning the strategy to reduce close contacts of children, were broadly similar for Cohorts 1 and 2 and have been combined for this summary. Four responders did not answer this question. The Government guidelines recommend smaller group sizes, with consistent students and teachers, kept 2 m apart where possible, with staggered break times. Outdoor activities are recommended where possible, and students are to stay in the same desks and rooms as much as possible. 76% of responders noted social distancing measures that were being put in place, with visual indicators for children to follow. Over 90% of responders followed the guidelines and indicated the use of classroom bubbles of varying sizes below the recommended number of 15. The answers to Q1b (Table 7) indicated that many bubbles were about 10 children. Some indicated that children will only be in school part time to allow teachers to accommodate the reduced class sizes. Other measures included the removal of furniture to allow for more space, rotation of toys and/or removal of certain play items, and lunches taking place in the classroom (either packed lunches or 'take-away' style cartons to remove the need for cutlery). In line with the guidelines, a third of the responses indicated that learning would be moved outdoors as much as possible, and a similar number noted the need for individual desks and resources. Around a third of teachers have extrapolated the guidelines and ensured that student bubbles are allocated their own areas of the playground, their own toilets or their own lunchtime spaces, and over 50% refer to the suggested staggered break times, start times, etc. This was also borne out at interview: almost all interviewees used the word 'strict' to describe their bubble, and indicated that adults avoided connecting between bubbles. Unsurprisingly, all responders to Q18, concerning cleaning, referred to changes to the usual cleaning regimes in their school, with over 70% making specific reference to ongoing cleaning throughout the day, highlighting high-touch areas such as computers (3 responses) and toilets (13 responses); this is broadly in line with Government recommendations 2.
In the main, deep cleaning is taking place before and/or after school, and reference is made to deep cleaning taking place after classroom bubbles have used specific areas; Government guidelines recommend thorough cleaning at the end of the day 3. Three responders made specific reference to hard plastic toys being sterilised in Milton after use, following DfE guidelines that recommend that cleaning of toys takes place between uses 4. Some teachers referred to the removal of soft toys and furnishings in their answer to question 1a. Three responders referred to their employment of additional cleaning staff, and three indicated that staff and children will, themselves, engage in cleaning processes. Several responders noted that classroom doors will be left open, presumably to avoid the need to touch door handles, as recommended in the Government guidelines 5. Some of the responses in the appendix indicate that some schools have gone beyond the guidelines, such as: bubbles having their own toilet cubicle and sink; cleaning staff using PPE which is double-bagged and stored for 72 hours before putting in bins; and disinfecting laptops after every use. Allergies to disinfectant in some children hampered cleaning of some areas. SARS-CoV-2 can persist on surfaces for hours to days (Eslami and Jalili 2020), and disinfecting surfaces is recommended by major health authorities (WHO 2020). Exposure via contaminated environmental surfaces appears to be a secondary vector of transmission; transmission via airborne droplets is more important. There is very little direct research on SARS-CoV-2, so studies rely on tests performed on similar viruses, or other viruses that are hard to kill. Surface disinfectants used to remove viruses include quaternary ammonium, ethyl alcohol, hydrogen peroxide and sodium hypochlorite (Henwood 2020). […] accordingly. 39% had put in place one-way systems to reduce contact between parents/children arriving and those leaving (at drop-off and collection times). 52% noted that parents were either not allowed on the school site or that only one parent could drop the child off, with teachers meeting children at the school gates. 48% referred to social distancing measures being put in place for parents waiting and/or for children in the playground. Approximately 20% referred to children not being allowed in the playground at the start of the school day and needing to go straight to class, and a further 20% referred to classes and bubbles having separate entrances and exits. Accurate quantification of the risk reduction gained from such measures is difficult. A scoping calculation, however, can give an indicative estimate. If the average parent has daily contact hours of 30 (Danon et al. 2013), then the mitigation measures described might reduce the contact hours by 1 or 2 hours compared to the normal mixing associated with delivering children to and from school, so we estimate an overall effect of a few percent (~3-6%), which can be compared to the 80% reduction associated with general lockdown (Brooks Pollock et al. 2020). The contribution to risk reduction within schools is likely to be very small, but larger and tangible in the wider community. Responses to the question on weather (Q20) indicate that contacts were not changed between indoors and outdoors.
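The scoping calculation above is simple enough to reproduce directly; the sketch below just restates the arithmetic, taking the paper's assumed baseline of 30 daily contact hours and a saving of 1 to 2 hours.

```python
# Reproduces the scoping calculation above: if an average parent has about
# 30 daily contact hours (Danon et al. 2013) and drop-off/collection
# measures remove 1-2 of those hours, the relative reduction is a few
# percent. The 30-hour baseline is the paper's own assumption.

baseline_contact_hours = 30.0
for saved_hours in (1.0, 2.0):
    reduction = saved_hours / baseline_contact_hours
    print(f"{saved_hours:.0f} h saved -> {reduction:.1%} reduction")
# 1 h saved -> 3.3% reduction
# 2 h saved -> 6.7% reduction
```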
It is now widely thought that outdoors is much less risky than indoors, but this was not the question. Interviews indicated that the physical layout of the school informed the responses, and some commented on the influence of very good weather during the course of the study. In relation to handwashing (Q21), Government guidelines recommend that adults and children should frequently wash their hands with soap and water for 20 seconds and dry them thoroughly 7. Responses (Table 7) gave a range from 3 to 13, with over half opting for a range between 3 and 10 handwashes. One teacher referred to the number of handwashes being dependent on the number of visits to the toilet. Those teachers who gave a specific lower number with a wider range have implemented a policy of handwashing at certain times, with variance added for toilet visits (one responder indicated scheduled handwash times). All responses are shown in the elicitation results. There is limited evidence on the efficacy of handwashing in reducing transmission. Soap is very effective at neutralising the virus (Eslami and Jalili, 2020). The chances of an infectious individual passing on the virus by touching a person or contaminating a surface are reduced by regular hand washing. The weight of evidence is that handwashing reduces transmission, but it may become counterproductive, less effective and even harmful if the handwashing becomes excessive. Risk reduction is estimated as 6% to 44% for respiratory diseases (Rabie and Curtis, 2006). Beale et al. (2020) studied risk reduction from handwashing for flu and common coronavirus infections in the UK. They estimate a significant reduction for between 6 and 10 handwashes per day (5th percentile = 58%; 50th percentile = 36%; 95th percentile = 1%), results consistent with those of Rabie and Curtis (2006). A reservation for these studies is that they involve extrapolation to transmission of SARS-CoV-2. This distribution is included in the risk modelling. The range in the elicited answers indicates that most schools have adopted an optimal regime of handwashing to reduce risk and have avoided excessive handwashing, which is considered to be potentially harmful. The schools follow Government guidelines for those displaying COVID-19 symptoms (Q22), but some schools have gone beyond these recommendations. In general, anyone showing symptoms is self-isolated, with some mentioning designated areas set aside for this purpose. A few will use PPE, which is recommended if the adult cannot keep 2 m apart. The symptomatic persons are then sent home immediately and asked to get tested. School areas are then cleaned following the guidelines 8 on the use of PPE. Two responders plan to notify the families of children in a bubble. One school proposed that everyone in that bubble would be recommended to get a test and, if it comes back positive, the whole bubble would be required to self-isolate for 14 days. The individual stays at home for 7 days or until a negative test is achieved. One responder said that if a positive test is confirmed the whole school will close. For cases where infection is suspected outside school hours, those that answered required the school to be informed, a test to be undertaken, and the school notified of the result. A small number mentioned that the person showing symptoms would not be able to attend until a negative test was achieved.
One responder stated that a positive test would result in self-isolation of the whole bubble for 14 days. Only a few responded on the policy for dealing with anyone with symptoms identified outside of school. Most interviewees had experienced at least one instance of a suspected case that required testing. While the protocols all required that the child stayed away from school, the responses varied in terms of the notification or isolation of the child's bubble until the test result was received. The DfE guidance states that bubbles only close if a positive test is returned, but one school will require immediate self-isolation until results come back. The survey indicates that all schools had strong measures in place, following and going beyond government guidelines, to minimise risk. Most responders to Q23 stated that the child cannot attend school and must self-isolate; Government guidelines are limited to just this piece of advice with respect to schools 9. A small number will inform other families within the same bubble. One school stated they had no policy for such events, and 2 schools indicated that the child of said adult could still come to school. One school submitted guidance on this from a local health protection team. Other comments (Q24) are unique to individual schools and are all included in the appendix below. Some noted issues concerning EAL and SEND students 10 and risk factors involved with school transport. The structured interviews augmented these findings from the risk mitigation survey but highlighted some additional issues. Interviewees mentioned the difficulties and stress for staff in maintaining social distancing and risk mitigation measures. The tension between social distancing and the key educational objectives of learning and developing social skills was highlighted. There was comment that some measures would be impossible or much more difficult with a full return to school. The circumstances in English primary schools in June and July 2020 are unprecedented and unlikely to be repeated. The partial re-opening of schools was undertaken under strict guidelines of social distancing and a range of risk mitigation methods to reduce the transmission of COVID-19. These unique circumstances provide an opportunity to evaluate the efficacy of different risk reduction strategies and to add to the data on contact patterns for young children and staff in the school environment. A striking aspect of our results is that the 34 participating schools proved to be representative of schools nationwide. Thus, as in opinion polls, an accurate picture of what is happening nationally in schools can be gleaned from a modest sample size. We were fortunate here that volunteerism led to a group of schools covering a wide range of sizes and communities. Even though the schools individually predicted, and then reported, a very wide range of returns for pupil and teacher attendance on 1st June, the average for the 36 schools was very close to the national picture. The prescience of experienced school leaders could be used to anticipate what will happen in September, when a full return to school has been mandated by Government. Breaking social networks is a key non-pharmaceutical intervention for countering the spread of infectious disease.
Our study indicates that contacts within schools were reduced by between 45% and 80% (Figure 5). These strikingly successful outcomes highlight the tremendous work of school staff and the role of greatly reduced class sizes through the creation of bubbles of children which are much smaller than normal class sizes. Although the marked reduction in contacts can be partly explained by much smaller […] achieved similar reductions in contacts. The data also confirm that older children (Cohort 2) are easier to manage than younger children (Cohort 1), with a more than two-fold greater reduction of contacts in the older children compared to the younger ones. The elicited data cannot be compared quantitatively with other kinds of contact survey data because of the variation in methodologies. The results indicate the same kind of heterogeneity as documented in Danon et al. (2013), in which the data are a mixture of individual contacts involving significant conversations at a distance of 3 m or less, or touching, plus contacts related to groups, where each person in a group counts as a contact. Results were expressed as contacts per day and total contact hours, noting that the latter parameter can exceed 24 hours because contacts within groups occur simultaneously. In a school setting, classroom adults (Cohort 3) have a median of 26 contacts (Q8a), of which about two-thirds are with children (Q12) and one-third with adults (Q13). The distance specified in the question is 1 m, with social distancing at the time of the elicitation being 2 m. Contact hours are of the order of 2 hours. In the context of a typical class of 30, the group definition of contacts in Danon et al. (2013) seems less relevant. There must be an additional risk factor related to being in an enclosed space with widespread aerosol circulation, but this is not quantifiable with the elicitation data. However, reducing class sizes from 30 to roughly 10 reduces the random risk of an infectious person being in the classroom to about one-third of its previous value. All of the risk mitigation strategies contribute to the reduction of risk. Specific risk management strategies reflect individual circumstances in schools. Many relate to steps which are likely to reduce contacts and break up contact networks. Each of the risk mitigation measures will contribute to risk reduction, but this is hard to quantify. It seems unlikely that the significant reduction of risk implied by these results can be maintained with a full return to school without greatly expanding the accommodation to maintain reduced class sizes, as suggested by the factor of 2 to 3 reduction in contacts between children. Adult staff can continue to observe strict social distancing behaviours and can continue to organise the classroom and break times in ways which reduce contacts. We are particularly appreciative of the Primary School leaders who volunteered to join the expert panel. Their enthusiasm, support and expert knowledge were paramount. The University of East Anglia fact-finding team of Jade Eyles, James Christie, Nicola Taylor […]
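The class-size argument in the discussion above can be illustrated with a short calculation: if community prevalence is p, the chance that at least one infectious person is present in a class of n is 1 − (1 − p)^n, which is nearly proportional to n when p is small, so cutting class size from 30 to about 10 cuts this risk to roughly one-third. The sketch below demonstrates this; the prevalence value is hypothetical.

```python
# Sketch of the class-size argument above: the chance that at least one
# infectious person is present in a class of n, given community prevalence
# p, is 1 - (1 - p)**n, which scales nearly linearly in n when p is small.
# The prevalence value is hypothetical.

def p_any_infectious(n: int, prevalence: float) -> float:
    """Probability that at least one of n randomly drawn people is infectious."""
    return 1.0 - (1.0 - prevalence) ** n

p = 0.002  # hypothetical community prevalence
risk_30 = p_any_infectious(30, p)
risk_10 = p_any_infectious(10, p)
print(f"class of 30: {risk_30:.3%}, class of 10: {risk_10:.3%}, "
      f"ratio ~ {risk_10 / risk_30:.2f}")  # ratio close to 1/3
```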
Cystic Fibrosis (CF) is a genetic multisystemic progressive condition caused by mutations of the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) gene. Pulmonary disease is the main cause of mortality in CF patients [1]. The progressive lung damage is marked by episodes of acute worsening of symptoms, called pulmonary exacerbations (PE), which are associated with disease progression and have an important impact on patients' quality of life [1, 2]. Bacterial infections have historically been acknowledged as the major cause of PE. However, with the popularization of molecular techniques for virus detection, several studies have demonstrated a large prevalence of viruses in PE events [3-15], showing a significant association between virus detection and PE symptoms [5, 8, 9, 12, 15]. Despite this growing identification, little is still known about the clinical impact of respiratory virus infection in CF patients, with studies often presenting conflicting results. The aim of this study was to determine the prevalence of respiratory virus infections in children and adolescents with CF during PEs, and to compare virus-positive and virus-negative groups with regard to clinical manifestations, severity of PE and bacterial colonization. CF patients were recruited from the CF service at the Fernandes Figueira National Institute for Women, Children and Adolescent Health in Rio de Janeiro, Brazil, from January to December 2018. The inclusion criteria were age < 18 years; regular follow-up at the CF center; and presence of PE, based on the Fuchs criteria [16]. Briefly, these criteria define an exacerbation by the presence of at least four of the following signs and symptoms: change in sputum pattern; new or worsening hemoptysis; increase in cough; dyspnea; malaise or fatigue; fever; anorexia or weight loss; sinus pain; change in sinus discharge; change in physical examination of the chest; decrease in pulmonary function by 10% of the forced expiratory volume; and radiographic changes indicative of pulmonary infection. Patients were excluded if they had other chronic pulmonary, cardiovascular, neurologic, digestive, or rheumatologic disease not related to CF. Clinical data were collected from patients' charts. The Cystic Fibrosis Clinical Score (CFCS) [17] was applied to evaluate PE severity and the Shwachman-Kulczycki Score [18] to evaluate CF severity. When a chest X-ray was not performed at the time, the last X-ray performed was used to apply this score. Nutritional evaluation was performed according to the Cystic Fibrosis Nutritional Guidelines [19]. Total nucleic acid was extracted from combined nasopharyngeal swabs using the QIAamp® Viral RNA Mini kit (QIAGEN, Hamburg, Germany), generating 80 µL of purified nucleic acid in the final step. Real-time reverse transcription polymerase chain reaction (RT-PCR) for the detection of Rhinovirus (RV), Adenovirus (AdV), Respiratory Syncytial Virus (RSV), Influenza A and B, human metapneumovirus (hMpV) and Parainfluenza (PIV) 1, 2 and 3 was performed using the GoTaq® Probe 1-Step RT-qPCR System kit (Promega, USA) in an ABI 7500 real-time thermocycler. The cycling protocol was: 45°C/30 min, 95°C/5 min, followed by 45 cycles of 95°C/15 s and 55°C/30 s, with data collection in the last stage. All molecular biology procedures and analyses were performed in the Respiratory Virus and Measles Laboratory at IOC/FIOCRUZ. Real-time results were made available to the CF Center's physicians and to the patients right after their analysis.
Oropharyngeal swabs or sputum samples for bacterial culture were also collected at each visit following the CF Center's protocol, and the results were made available to the researchers through patients' charts. This study was approved by both institutions' Research Ethics Committees (protocol n. 79277117.0.0000.5269) and informed consent was obtained from all participants and their parents/guardians. Statistical analysis was performed using R, version 3.5.2 (2018). Virus-positive and virus-negative groups were compared using the chi-squared and Fisher's exact tests for categorical variables, Student's t-test for normally distributed numerical variables and the Mann-Whitney U test for skewed data. Results were considered statistically significant when p < 0.05. During 2018, 183 patients were followed by the CF Center, totaling 706 appointments. Forty-eight of these patients presented PE during appointments, on 71 different occasions. One case was excluded because the patient had chronic hypoxic-ischemic encephalopathy, leaving a total of 47 patients and 70 episodes of PE included in the study. All 70 swab samples were tested by real-time RT-PCR, and 25 (35.7%) were positive, as shown in Table 1. Two samples tested positive for more than one virus: one case of PIV 1 + RSV and one of PIV 2 + hMpV. Epidemiological characteristics of the patients included in the study are described in Table 2. Clinical characteristics of the virus-positive and virus-negative groups are shown in Table 3. The number of X-ray exams ordered and the finding of new radiologic images suggesting infection did not differ between the virus-positive and virus-negative groups. Bacterial colonization status prior to the PE events is described in Table 4. Bacterial cultures from sputum or oropharyngeal swabs at the moment of the PE were performed for all but two of the samples enrolled in the study (Table 5). A subgroup analysis was conducted with 37 samples from children <5 years. Fourteen samples (37.8%) were positive for virus detection, namely: six RSV, four RV, two hMpV, one Influenza A(H1N1)pdm09, one PIV 1, one AdV. Clinical and epidemiological characteristics of these patients are shown in Table 6. The prevalence of respiratory viruses in this study was 35.7%. In agreement with previous studies, picornaviruses were the most frequently detected viruses, regardless of age [4-9, 12, 20, 21]. RSV was the second most frequent virus, and had the lowest hospital admission rate, even though it has been suggested that CF patients could be at risk for more severe infection by this agent [22]. One possible explanation is that severe infections are more common in patients under two years of age, whereas the median age of the patients in our study was 4.6 years; of note, the mean age of RSV-positive patients was two years. Regarding clinical findings, change in sinus discharge was associated with viral detection (p = 0.03), as would be expected and as described by other authors [8, 9]. In patients under five years old, fever had a significant association with virus detection (p = 0.01), as also previously described [3]. Influenza has been found to be particularly associated with the presence of fever [9]. In our study, both influenza-positive cases were older than five years and did indeed present fever. Regarding treatment decisions, antibiotic prescription was not significantly different in patients with and without virus detection, corroborating that these situations may be difficult to distinguish clinically.
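The group comparisons described in the methods can be illustrated with a short sketch. This is not the authors' analysis script (their analysis was run in R 3.5.2); it is a hedged Python/SciPy equivalent on made-up counts and scores, showing which test applies to which variable type:

```python
# Illustrative sketch of the comparisons described above, on
# hypothetical data (the study's own dataset is in S1 Appendix).
import numpy as np
from scipy import stats

# Categorical variable (e.g., a symptom) in virus-positive vs
# virus-negative samples: chi-squared, or Fisher's exact when counts
# are small. Columns sum to 25 positive and 45 negative samples.
table = np.array([[12, 13],   # symptom present: positive, negative
                  [13, 32]])  # symptom absent:  positive, negative
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)

# Numerical variable: Student's t-test if ~normal, Mann-Whitney U
# otherwise. Scores here are randomly generated placeholders.
score_pos = np.random.default_rng(0).normal(20, 5, 25)
score_neg = np.random.default_rng(1).normal(18, 5, 45)
t_stat, p_t = stats.ttest_ind(score_pos, score_neg)
u_stat, p_u = stats.mannwhitneyu(score_pos, score_neg)

print(p_chi2, p_fisher, p_t, p_u)  # significant when p < 0.05
```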
On the other hand, oseltamivir prescription was extremely low, even when clinical symptoms were indicative of its empiric use. Only one patient received antiviral treatment, which was started empirically and was later confirmed as Influenza A H3N2. The second patient with influenza A was not treated because the drug was not started empirically and viral detection results only became available after 48 hours of symptoms, when the patient was already clinically improving. It is important to note that the vaccination rate for influenza was also low (58.1%), as previously reported [14, 15]. CF is an important risk factor for severe infection by influenza [23], and adequate treatment and prevention of this condition must be encouraged. In contrast to other studies, which showed PE to be more severe in cases with viral detection [3, 8, 14, 24], our findings did not show a significant difference between virus-positive and virus-negative groups, as measured by the CFCS and by dyspnea, low oxygen and hospital admission rates. However, the median age in those studies was older, including adolescents and adults, and it must also be stressed that the limited sample size may have reduced the power of the statistical analysis. Viral infections have been shown to increase susceptibility to bacterial colonization of the respiratory tract [25, 26]. Since this colonization plays a major role in the progression of lung disease in CF, the relationship between these agents in CF is an important object of study. RSV has been shown to promote Pseudomonas colonization in CF patients [27, 28], and the same has been suggested for picornaviruses [11]. In patients already chronically colonized by this bacterium, RV may increase the liberation of planktonic bacteria from the biofilm [29], which is associated with new bacterial infection. On the other hand, a study by Chin et al. did not find an increase in Pseudomonas density in sputum during viral infections [30]. As in most previous studies [9, 10, 22, 31], we found no difference in Pseudomonas prevalence in patients with viral detection. It has also been suggested that the presence of bacteria in the respiratory tract of CF patients may favor viral infection [25], but more studies regarding the specific role of Pseudomonas in this interaction are still needed. In the present study, virus detection did not significantly differ between different types of previous bacterial colonization. It is important to consider that in developing countries such as Brazil, the age of Pseudomonas colonization tends to be younger than in wealthier countries, where most of the cited studies were performed. In our population, 57.1% of samples belonged to children with previous intermittent or chronic P. aeruginosa colonization. The prevalence of respiratory viruses, although significant, was smaller than reported in some prior studies that also applied molecular techniques [3-6, 11]. An explanation for this difference is the broader PE definition criteria [3, 5, 6] and the inclusion of samples from patients with upper respiratory tract infections without PE [4, 11]. In our view, more open inclusion criteria will eventually include milder cases, which are less important for the course of the disease or for clinical management. In order to account for viral seasonality patterns, samples were collected over 12 consecutive months.
On the other hand, some studies were conducted only throughout the autumn and winter seasons, when the circulation of respiratory viruses is higher, which may have contributed to higher viral detection rates [3, 4, 7]. The study's main limitations were the sample size and the timing of sample collection. In a previous study in which patients were trained to self-collect samples as soon as the first signs of respiratory disease appeared and mail them to the laboratory, virus detection rates reached 81% of the 43 samples investigated [6]. In addition, we did not test for coronavirus or bocavirus. Even though these agents have usually been shown to have a small prevalence in PE [3, 5-7, 10, 12, 20], this may have had some impact on the total viral detection rates, and especially on the seven samples in which neither virus nor bacteria were identified. Our findings corroborate that respiratory viruses have a significant prevalence in pulmonary exacerbations in CF. Their detection was associated with change in sinus discharge and, in children <5 years, with fever. Routine testing for these agents may help to better guide PE treatment and antibiotic use. Furthermore, as PEs have a great impact on morbidity and mortality in CF patients, the recognition of respiratory viruses as important agents in these conditions shows how fundamental it is to strengthen preventive strategies such as isolation protocols for patients and immunization. Unfortunately, there are still no effective vaccines, prophylaxis or treatments for most respiratory viruses; however, viral diagnosis supported the rational use of antibiotics, avoiding their misuse. Longitudinal studies are still needed to better understand the relationship between viruses and bacterial colonization in CF. Supporting information: S1 Appendix. De-identified data set. (XLSX)
The concept of receptors for the Fc portion of immunoglobulins arose in the 1960s to explain cell-mediated biological activities of antibodies. 'Opsonins' indeed enabled antigen to enter phagocytic cells (Berken and Benacerraf, 1966); 'cytophilic' antibodies sensitized tissues that released histamine upon antigen challenge (Bloch, 1967); distinct classes of antibodies differentially regulated secondary antibody responses (Henry and Jerne, 1968). Because these biological effects required the Fc portion of antibodies, the name 'Fc receptor' (FcR) was coined (Paraskevas et al., 1972). FcRs for various antibody classes were identified as binding sites on a variety of cells (Vaughan and Boyden, 1964; Kulczycki and Metzger, 1974; Unkeless et al., 1988). FcRs were characterized functionally and biochemically (Holowka et al., 1980; Ernst et al., 1993; Pfefferkorn and Yeaman, 1994). Murine and human cDNAs encoding FcRs were cloned, sequenced, and expressed by transfection (Ravetch and Kinet, 1991); the corresponding genes were located on chromosomes and their exon/intron organization was elucidated (Qiu et al., 1990). The extracellular domains of FcRs were recognized as members of the immunoglobulin superfamily (IgSF) (Williams and Barclay, 1988); the amino acid sequences enabling them to interact with antibodies, extracellularly (Hulett and Hogarth, 1994), and to signal, intracellularly (Daëron, 1997), were dissected; the 3D structure of their extracellular domains in complex with immunoglobulin Fc portions was solved (Garman et al., 1998; Maxwell et al., 1999). Finally, a collection of genetically modified FcR knock-out (KO), knock-in (KI), and transgenic mice was generated that enabled FcR functions to be delineated in vivo (Smith et al., 2012). FcRs thus appeared as a family of functionally, structurally, and genetically related molecules that play major roles in antibody-dependent processes in physiology, in pathology, and, with the advent of passive immunotherapy, in therapeutics. Genes encoding FcR-related molecules were unexpectedly discovered in the early 1990s, clustered with human FCR genes (according to the usual typographic convention, protein names are in roman type, whereas gene names are in italics; names of human genes are in upper case, whereas names of murine genes are in lower case) (Imboden et al., 1989; Seaman et al., 1991). Similar genes were found in the same clusters as mouse fcr genes (Figure 1). A whole family of putative Fc receptor-like molecules (FcRLs; the abbreviation 'FcRL' is used instead of 'FCRL' for consistency with 'FcR') thus emerged (Davis et al., 2001; Hatzivassiliou et al., 2001), whose existence was progressively confirmed (Li et al., 2014). As FcRLs originated from genetic studies, much less is known of their biological functions compared to FcRs. A syntenic chromosomal linkage, a similar genetic organization, and a common membership of the IgSF suggest that FcRLs may be functionally related to FcRs. Supporting this assumption, both FcRs and FcRLs possess immunoreceptor tyrosine-based activation motifs (ITAMs), like B cell and T cell receptors (BCR and TCR) for antigen (Reth, 1989), and/or immunoreceptor tyrosine-based inhibition motifs (ITIMs), like inhibitory receptors expressed by natural killer (NK) cells (Vivier and Daëron, 1997). FcRs and FcRLs therefore belong to the immunoreceptor family. Differences in their structure, ligands, and pattern of expression, however, indicate that FcRs and FcRLs play distinct, complementary roles.
I will discuss the genetic and phylogenetic relationships between FcRs and FcRLs; the structure and biological properties of FcRs and FcRLs; the tissue distribution and biological functions of FcRs and FcRLs; the roles of FcRs and FcRLs in health and disease; and the biological significance of FcRs and FcRLs within the immunoreceptor family. Human (h) FcRs comprise 'classical FcRs,' a receptor for IgA (FcαRI) (Pfefferkorn and Yeaman, 1994), an MHC-related receptor (FcRn) (Simister and Rees, 1985), and a lectin-like receptor (FcεRII) (Conrad, 1990). Genes that encode classical hFcRs are within the FCR locus on chromosome 1, whereas the FCAR gene is in the leukocyte receptor complex (LRC) locus on chromosome 19 (Akula et al., 2014). The LRC locus contains genes that encode the killer cell immunoglobulin-like receptors (KIRs), the leukocyte Ig-like receptors (LILRs), and the leukocyte-associated Ig-like receptors (LAIRs), with which FcαRI shows a higher sequence homology than with classical FcRs. FCER2, the gene that encodes hFcεRII, is also located on chromosome 19. Noticeably, genes encoding signaling homodimers shared by FcRs, NK receptors, and T cell receptors also lie in the same two loci. Genes encoding FcRγ and TCRζ are in the FCR locus, whereas genes encoding DAP10 and DAP12 are in the LRC locus. FcRn stands for neonatal FcR because this IgG receptor was first observed in newborn mice. FcRn is related neither structurally nor genetically to classical FcRs. It is an MHC class I molecule encoded by a gene of the MHC complex on chromosome 6 (Ghetie and Ward, 2000). Mice have no equivalent of hFcαRI. Indeed, mouse genes encoding KIR-like molecules moved from the lrc complex to chromosome X, and the fcarl gene is thought to have been lost during translocation (Woof and Kerr, 2006). Mouse (m) FcRs therefore comprise classical FcRs encoded by genes of the fcr locus, FcRn and FcεRII. The fcr locus, however, was split into two fragments. The gene encoding the mouse high-affinity IgG receptor (mFcγRI) is on chromosome 3, while other classical fcr genes are on chromosome 1 (Akula et al., 2014). fcer2, the gene that encodes mFcεRII, is on chromosome 8 (Conrad et al., 1993). The gene that encodes mFcRn is among other MHC-I genes, on chromosome 17. Genes that encode FcRLs are in the same loci as genes encoding classical FcRs in both species (Figure 1). All human FCRL genes are in the single human FCR locus on chromosome 1. Murine fcrl genes are distributed in the two murine fcr loci, on chromosomes 1 (fcrla and b) and 3 (fcrl1, fcrl5, fcrl6, and fcrls) (Davis et al., 2002; Akula et al., 2014). Bioinformatic, genetic, and phylogenetic analyses in mammals, birds, reptiles, amphibians, bony fishes, cartilaginous fishes, and lampreys revealed that classical FcRs and FcRLs first appeared together and remained closely linked during evolution, as their complexity increased in parallel with that of immunoglobulins. Genes encoding the IgA/IgM poly-immunoglobulin receptor (pIgR), FcRLs, and FcRγ appeared first within the fcr locus, as did genes homologous to mammalian genes of the LRC locus, in early bony fishes. Noticeably, genes encoding FcRLs were the ancestors of genes encoding FcRs for IgG (FcγRI, II, III, and IV) and IgE (FcεRI), while duplicated sequences from the pigr gene provided sequences for genes encoding receptors for IgA/IgM (FcαμR) and for IgM (FcμR) during early mammalian evolution (Akula et al., 2014). The majority of classical FcRs therefore derive from FcRLs.
Most FcRs and FcRLs are transmembrane molecules that generate intracellular signals when engaged by extracellular ligands. These biological properties depend (1) on the structure of their extracellular domains and their interactions with extracellular ligands and (2) on the signaling motifs in their intracytoplasmic domains and their ability to transduce signals across the plasma membrane and to generate productive signalosomes. The properties of non-transmembrane FcRLs are not well characterized. Some FcRs are single-chain immunoglobulin-binding molecules. These include IgG receptors (FcγRIIA, FcγRIIB, FcγRIIC, and FcγRIIIB), IgE receptors (FcεRII), IgM receptors (FcμR), and IgA/IgM receptors (pIgR and FcαμR). Other FcRs are multichain receptors. They include IgA (FcαRI), IgE (FcεRI), and IgG (FcγRI, FcγRIIIA, FcγRIV, and FcRn) receptors (Daëron, 2014). Multichain FcRs are composed of a specific immunoglobulin-binding subunit named FcRα and one or two common subunits. FcRγ is a disulfide-bonded homodimer shared by most multichain FcRs (Orloff et al., 1990). FcRβ is a tetraspanin that associates with multichain FcRs in mast cells and basophils (Kurosaki et al., 1992). Like other MHC-I molecules, FcRn associates with β2-microglobulin (Israel et al., 1995). FcRγ and β2-microglobulin are mandatory for the expression of multichain FcRs and FcRn, respectively. FcRβ is mandatory for the expression of FcεRI in mice. All FcRLs are single-chain receptors.

[Figure 1: Human and murine Fc receptor (FcR) and Fc receptor-like molecule (FcRL) genes. Organization of the genes encoding FcRs and FcRLs in humans and mice, on their respective chromosomes (Davis et al., 2002; Akula et al., 2014). The figure is not drawn to scale.]

They comprise six transmembrane molecules in humans (hFcRL1, 2, 3, 4, 5, 6) and three in mice (mFcRL1, 5, 6), two intracellular molecules (FcRLA and B) in both species, and one soluble molecule (mFcRLs) in mice (Li et al., 2014). Except for FcεRII, whose extracellular domain is a C-type lectin, transmembrane FcRs and FcRLs have extracellular domains made of variable numbers of IgSF domains. Three receptors have IgSF domains of the V-type: there are five such domains in the pIgR and one in FcμR and FcαμR. Other mouse and human FcRs have IgSF domains of the C2-type. All have two such domains except FcγRI, which has three. Human FcRL1 has three, FcRL2 and FcRL4 have four, FcRL3 has six, and FcRL5 has nine C2-IgSF domains. Mouse FcRL1 and FcRL6 have two and FcRL5 has five C2-IgSF domains. In both mice and humans, FcRLA and FcRLB also have IgSF domains, but these are intracellular, as well as a unique C-terminal mucin-like region (Li et al., 2014). Most FcRs and FcRLs contain, or are associated with subunits that contain, tyrosine-based signaling motifs. In both humans and mice, only one FcR (FcγRIIB) contains an ITIM (Daëron et al., 1995a). Except for three low-affinity IgG receptors that are unique to humans (FcγRIIA and FcγRIIC, which contain an ITAM in their own intracytoplasmic domain, and FcγRIIIB, which has no intracytoplasmic domain), most other human and murine FcRs are constitutively associated with the ITAM-containing FcRγ subunit. The FcRβ subunit also contains an ITAM. The more distant FcRs pIgR, FcμR, and FcαμR, as well as FcRn, have no known activation or inhibition motif. All transmembrane FcRLs contain ITIMs and/or ITAMs in their intracytoplasmic domain. Human and mouse FcRL1 contain two ITAMs, whereas hFcRL4 contains two ITIMs.
Human and mouse FcRL6 contain one ITIM only. Human and mouse FcRL5, as well as hFcRL2, contain two ITIMs and one ITAM. hFcRL3 contains one ITAM and one ITIM (Akula et al., 2014; Li et al., 2014). An ability to bind immunoglobulins defines FcRs. Due to their structural and genetic kinship with FcRs, FcRLs were expected to bind immunoglobulins too. Three FcRLs, hFcRL4, hFcRL5, and the intracellular hFcRLA, do but, in spite of extensive searches, other FcRLs do not. Instead, mFcRL5 and hFcRL6 bind MHC molecules. The remaining FcRLs are orphan receptors. The affinity with which antibodies bind to FcRs depends both on the receptors and on the ligands. The binding of antibodies to FcRs is reversible and obeys the mass action law. It is characterized by an affinity constant (Ka), calculated by dividing the association rate constant by the dissociation rate constant. The affinity constant is a characteristic of FcRs. One distinguishes two classes of FcRs. High-affinity FcRs have a Ka between 10⁷ and 10¹⁰ M⁻¹ (Kulczycki and Metzger, 1974; Unkeless and Eisen, 1975). They can bind immunoglobulins as monomers, that is, not in complex with antigen. Low-affinity FcRs have a Ka between 10⁵ and 10⁷ M⁻¹ (Bruhns et al., 2009). They cannot bind monomeric immunoglobulins. Both high- and low-affinity FcRs, however, bind immune complexes with high avidity. A proportion of high-affinity FcRs are occupied in vivo, whereas low-affinity FcRs are not, in spite of the high concentration of circulating immunoglobulins; they are therefore available for binding immune complexes. Occupied high-affinity FcRs, however, can be freed as bound antibodies dissociate (Mancardi et al., 2008). The dissociation constant therefore critically determines the availability of high-affinity FcRs. High-affinity FcRs include IgA (FcαRI, in humans, and pIgR, in humans and mice), IgE (FcεRI, in humans and mice), and IgG receptors (FcγRI and FcRn, in humans and mice, and FcγRIV, in mice only). Low-affinity FcRs include IgE (FcεRII, in humans and mice) and IgG receptors (FcγRII and III, in humans and mice). Humans have three FcγRII (FcγRIIA, B, and C) and two FcγRIII (FcγRIIIA and B), whereas mice have only one receptor of each type (FcγRIIB and FcγRIIIA). The diversity of hFcγRII and III is further increased by polymorphisms in their extracellular domains (H131R in hFcγRIIA (Warmerdam et al., 1990), F158V in hFcγRIIIA (Ravetch and Perussia, 1989), N65S, A78D, D82N, and V106I in FcγRIIIB (Ory et al., 1989)). Altogether, 10 hFcγRs have been described. FcRs are not isotype-specific. Antibodies of several isotypes can bind to one FcR; vice versa, several FcRs can bind antibodies of one isotype. Thus, every FcγR can bind several subclasses of IgG, especially in humans, where IgG1, IgG2, IgG3, and IgG4 bind similarly to hFcγRI; hFcγRIIA, B, and C; and hFcγRIIIA and B (Bruhns et al., 2009). Also, mouse IgE can bind to mFcγRIIB and mFcγRIIIA (Takizawa et al., 1992) and to FcγRIV (Mancardi et al., 2008). The ability of immunoglobulins to bind to FcRs also depends on the glycosylation of their Fc portion (Arnold et al., 2007). Each heavy chain contains a covalently attached N-glycan at the highly conserved N297 residue in its CH2 domain. Point mutations of this glycosylation site abrogate the ability of IgG antibodies to bind to FcγRs, but not to FcRn (Veri et al., 2007). Other mutations that remove fucose residues from the glycan chain enhance the binding of antibodies to FcγRIIIA (Natsume et al., 2005; Niwa et al., 2005).
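The affinity classification above follows directly from the mass action law. As a worked illustration, with assumed round-number concentrations (not values from the text):

```latex
\mathrm{Ab} + \mathrm{R} \;\rightleftharpoons\; \mathrm{AbR},
\qquad
K_a \;=\; \frac{[\mathrm{AbR}]}{[\mathrm{Ab}]\,[\mathrm{R}]}
      \;=\; \frac{k_{\mathrm{on}}}{k_{\mathrm{off}}},
\qquad
K_d \;=\; \frac{1}{K_a}.
```

The fraction of receptors occupied at a free antibody concentration [Ab] is then

```latex
\theta \;=\; \frac{[\mathrm{Ab}]}{[\mathrm{Ab}] + K_d},
\qquad
\theta_{\mathrm{high}} \;=\; \frac{10\,\mathrm{nM}}{10\,\mathrm{nM} + 1\,\mathrm{nM}} \;\approx\; 0.91,
\qquad
\theta_{\mathrm{low}} \;=\; \frac{10\,\mathrm{nM}}{10\,\mathrm{nM} + 10\,\mu\mathrm{M}} \;\approx\; 10^{-3},
```

taking a high-affinity receptor with Ka = 10⁹ M⁻¹ (Kd = 1 nM) and a low-affinity receptor with Ka = 10⁵ M⁻¹ (Kd = 10 µM), each exposed to monomeric antibody at an assumed 10 nM. This makes concrete why high-affinity FcRs are substantially occupied by monomeric immunoglobulins in vivo while low-affinity FcRs remain essentially free and available for avid, multivalent immune complexes.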
Recently, the Fc portion of immunoglobulins was found to oscillate between a 'closed' and an 'open' conformation, which also determines their affinity for FcRs (Ahmed et al., 2014). Thus, IgE in the closed conformation bind preferentially to FcεRI, whereas IgE in the open conformation bind preferentially to FcεRII. The majority of FcRLs have no known ligand. These include hFcRL1, hFcRL2, hFcRL3, hFcRLB, mFcRL1, mFcRL6, mFcRLA, and mFcRLB. The two types of ligands identified are immunoglobulins and MHC molecules. Only hFcRLs were found to have an affinity for immunoglobulins. hFcRL4 binds heat-aggregated IgA, and hFcRL5 binds IgG of the different subclasses (Li et al., 2014). Noticeably, IgG binds to hFcRL5 and to hFcγRs by different mechanisms. Binding indeed requires not only the Fc portion, but also the F(ab')₂ moiety of intact IgG, through two independent binding events. Like binding to FcRs, binding to hFcRL5 requires glycosylated IgG (Franco et al., 2013). Although intracellular, hFcRLA was also reported to have an affinity for IgA, IgM, and IgG. One human and one murine FcRL interact with MHC molecules. hFcRL6 has an affinity for MHC class II molecules, and this affinity varies with the MHC-II haplotype (Schreeder et al., 2010). mFcRL5 has an affinity for an MHC-related viral protein; this MHC class I-like molecule, encoded by the cowpox virus, also binds to NKG2D on NK cells (Campbell et al., 2010). FcRs trigger signals when aggregated on cell membranes by antibodies and plurivalent antigens (Maeyama et al., 1986; Metzger, 1992). Although the result is the same, the sequence of events leading to receptor aggregation differs between high-affinity and low-affinity FcRs. Monomeric antibodies bind first to high-affinity FcRs, which are aggregated later, when a plurivalent antigen binds to receptor-bound antibodies. Antibodies bind first to antigen, generating immune complexes that can bind to and, therefore, simultaneously aggregate low-affinity FcRs. FcRL signaling is not well documented, owing to the paucity of known natural ligands. It was mostly investigated using anti-FcRL antibodies expected to mimic FcRL natural ligands, sometimes on FCRLs expressed by transfection in a murine B cell line. FcRs can trigger activation signals and/or inhibition signals. The nature of these signals primarily depends on molecular motifs contained in the intracytoplasmic domains of FcRs or of receptor subunits with which FcRs associate. ITAMs consist of two YxxL motifs separated by a variable sequence of 6-8 amino acids (Reth, 1989). ITIMs consist of a single YxxL motif preceded by a loosely conserved, often hydrophobic residue at position Y-2 (Vivier and Daëron, 1997). Internalization motifs enable FcRn and pIgR to transcytose IgG and/or IgA across polarized cells. Activating FcRs are FcαRI, FcεRI, FcγRI, FcγRIIA, FcγRIIC, FcγRIIIA, and FcγRIV. Upon receptor aggregation, ITAMs are phosphorylated by src family tyrosine kinases (Pribluda et al., 1994), which initiates the constitution of dynamic intracellular signalosomes (Kent et al., 1994). Not only activation signals are generated by activating FcRs, however: these receptors generate a mixture of positive and negative signals (Malbec et al., 2004), the dominant effect of which is activation under physiological conditions.
Under other conditions, though, such as an excess of antigen that leads to hyperaggregation of FcRs, negative signals overcome positive signals and, paradoxically, activating FcRs prevent cell activation (Gimborn et al., 2005). The inhibitory FcR is FcγRIIB (Daëron, 1997; Ravetch and Bolland, 2001). FcγRIIB generates inhibition signals only. Its inhibitory properties depend on the ITIM present in all murine and human FcγRIIB isoforms (Daëron et al., 1995a). Unlike activating receptors, FcγRIIB does not signal upon aggregation; it triggers negative signals when coaggregated with activating receptors by immune complexes (Daëron et al., 1995b). Under these conditions, the ITIM of FcγRIIB is phosphorylated by the same src family tyrosine kinase that phosphorylates ITAMs in activating receptors (Malbec et al., 1998). Phosphorylated FcγRIIB recruits inhibitory molecules that are brought into signalosomes. This renders inhibition signals dominant over activation signals (Lesourne et al., 2005; Daëron and Lesourne, 2006). The aggregation of identical FcRs only (homoaggregation) is a rare situation. Different FcRs are coaggregated when IgG immune complexes interact with cells that coexpress different FcγRs or when pluri-isotypic immune complexes bind to cells that coexpress FcRs for several classes of antibodies. Even when cells express only one type of FcR (e.g., FcγRIIB in murine B cells or FcγRIIIA in murine NK cells), immune complexes can coengage FcRs with other immunoreceptors (the BCR in B cells or NKRs on NK cells). Heteroaggregation, that is, the coaggregation of different types of FcRs or the coaggregation of FcRs with other immunoreceptors, is actually the rule, rather than the exception, under physiological conditions. Because there are FcRs for all antibody classes, because immune complexes contain more than one class of antibody, and because most cells express more than one type of FcR, various combinations of FcRs can be engaged at the cell surface to form heteroaggregates with a non-predetermined composition. FcRs can thus generate a variety of signaling complexes, depending on the relative proportion of ITAM-containing and ITIM-containing receptors that are coengaged by immune complexes on any given cell (Daëron, 2014). Using specific antibodies that mimic FcRL ligands, FcRL signaling was found to obey rules similar to those of immunoreceptor signaling (Ehrhardt and Cooper, 2011). The engagement of hFcRL1 or mFcRL1, which contain two ITAMs, generates activation signals. Like the BCR and the TCR, but unlike FcRs, FcRLs trigger both activation and proliferation signals. The engagement of the two-ITIM- and one-ITAM-containing hFcRL2, hFcRL5, and mFcRL5 generates a mixture of effects, the dominant one being inhibition. Although it contains both activation and inhibition motifs, hFcRL5 does not signal upon aggregation. It must be coengaged with activating receptors to trigger negative signals. When hFcRL5 is coligated with the BCR, the N-terminal hFcRL5 ITAM recruits the src kinase Lyn, which phosphorylates the ITIM, which in turn recruits the tyrosine phosphatase(s) SHP-1/2, which inhibit BCR signaling (Zhu et al., 2013). Unlike hFcRL5, when expressed in Ramos B cells, the two-ITIM-containing hFcRL4 was constitutively phosphorylated and associated with SHP-1/2, suggesting that it could exert a constitutive negative effect (Sohn et al., 2011). FcRs and FcRLs have no specific function per se.
They transduce signals that trigger, inhibit, or, generally speaking, control the functions of FcR- and FcRL-expressing cells. Responding cells are selected by the ligands with which their receptors interact. Biological functions induced via FcRs and FcRLs therefore primarily depend on the tissue distribution of these receptors. Ultimately, they depend on the functional repertoires of FcR- and FcRL-expressing cells. Except for FcRn and pIgR, both FcRs and FcRLs are primarily expressed by cells of the hematopoietic lineage. FcRs, however, are expressed mostly, though not only, by myeloid cells, whereas FcRLs are expressed mostly, if not only, by lymphoid cells, especially B lymphocytes. Activating FcRs are expressed by myeloid cells of all types, that is, monocytes, macrophages, dendritic cells, polymorphonuclear cells of the three types, mast cells, etc. They are also expressed by NK cells, NKT cells, and intraepithelial γ/δ T cells (Deusch et al., 1991; Sandor et al., 1992; Woodward and Jenkinson, 2001). FcγRIIIA was also reported on a subset of murine CD8 T cells (Dhanji et al., 2005). Inhibitory FcRs are expressed by most myeloid cells and by B lymphocytes. Noticeably, human basophils express much higher levels of FcγRIIB than any other blood cells (Cassard et al., 2012). A few non-hematopoietic cells, such as some endothelial cells and some tumor cells (Cassard et al., 2002), also express FcRs. FcRn is expressed by many cells, including epithelial cells, myeloid cells, and hepatocytes (Ghetie and Ward, 2000). The pIgR is expressed by polarized epithelial cells, especially of the mammary gland and the gut (Kaetzel et al., 1991). FcRLs have a much more restricted distribution in both humans and mice (Li et al., 2014). Biological responses induced by antibodies depend on the functional repertoire of FcR-expressing cells. The wide tissue distribution of FcRs therefore endows antibodies with a wide spectrum of biological functions. Antibodies, however, do not necessarily activate; they can as well inhibit the responses of cells that coexpress activating and inhibitory FcRs. FcRLs essentially regulate B cell functions. Noticeably, they appear to differentially control BCR- and TLR-dependent activation, proliferation, and differentiation of various B cell subsets. FcRs control the internalization of immune complexes. All cell types pinocytose and endocytose, some phagocytose, and others can transcytose. Specific cells can exocytose: they release granules that contain cytotoxic, vasoactive, or proinflammatory mediators and proteases. Many cells can synthesize and secrete cytokines, chemokines, or growth factors. Immune responses being pluri-isotypic, and cells of different types sharing FcRs for the same isotypes, antibodies select heterogeneous, rather than homogeneous, cell populations when in complex with antigen. These populations comprise a mixture of FcR-expressing cells that are present, were recruited, and/or proliferated locally. Biological processes in which FcRs are involved are therefore the net result of the responses of many cells. FcRLs differentially control B cell functions. Activation signals generated by the two-ITAM-containing human and murine FcRL1 stimulate B cell proliferation, like signals generated by the BCR. Conversely, the ITAM+ITIM-containing FcRL2-5 generally negatively regulate BCR signaling. However, when coligated with the BCR, FcRL3 inhibited activation signals, whereas it enhanced B cell activation, proliferation, and survival when coligated with TLR9.
Likewise, the constitutive negative regulation of BCR signaling by FcRL4 was accompanied by a positive regulation of TLR9 signaling (Sohn et al., 2011). Noticeably, while enhancing proliferation, the coligation of FcRL3 and TLR9 inhibited plasma cell differentiation and antibody production (Li et al., 2014). When coengaged with the BCR, mFcRL5 had antagonistic effects on Ca²⁺ responses and on MAPK activation, which differentially controlled BCR-dependent signals in B1 B cells and in marginal zone B cells (Zhu et al., 2013). Altogether, these results indicate that FcRLs containing both ITAMs and ITIMs can differentially regulate (1) BCR- and TLR-dependent, that is, adaptive and innate, signals, (2) activation versus proliferation and differentiation signals, and (3) B cell subsets. In physiology, owing to their cellular expression, FcRs control the many biological functions of myeloid cells, while FcRLs primarily regulate B cell activation and antibody responses. FcRs mediate most biological activities induced by antibodies. They are not readily accessible to investigation in physiology. FcRs were, however, shown to protect and transport immunoglobulins and to control adaptive immune responses. FcRn protects IgG from degradation (Huber et al., 1993; Raghavan et al., 1993; Junghans and Anderson, 1996). It also transports IgG across the gut (Yoshida et al., 2004; He et al., 2008) and maternal IgG across the placenta (Palmeira et al., 2012). The pIgR transcytoses IgA and IgM, especially through the mammary gland (Johansen et al., 1999). Activating FcRs enhance MHC-I and II presentation of tumor antigens (Desai et al., 2007), while FcγRIIB dampens dendritic cell maturation and antigen presentation (Kalergis and Ravetch, 2002). FcγRIIB therefore contributes to peripheral T cell tolerance (Desai et al., 2007). Conversely, FcγRIIB expressed by follicular dendritic cells can 'present' T-independent antigens to B cells (Szakal et al., 1985; Mond et al., 1995). Follicular dendritic cell FcγRIIB also prevents the Fc portions of IgG immune complexes from coengaging B cell FcγRIIB with the BCR and inhibiting B cell activation (Tew et al., 2001; El Shikh et al., 2006; Wu et al., 2008). Unlike immune responses to soluble antigens, which are enhanced by IgG antibodies (Hjelm et al., 2006), immune responses to particulate antigens such as erythrocytes are suppressed by minute amounts of IgG antibodies. This observation provided the rationale for injecting Rh− mothers of Rh+ babies with anti-RhD antibodies to prevent hemolytic disease of the newborn. FcγRIIB-dependent negative regulation, however, does not account for feedback regulation by antibodies, which was altered neither in FcγRIIB-deficient mice (Heyman et al., 2001) nor in mice lacking all FcγRs. IgE antibodies are potent adjuvants (Getahun et al., 2005). When interacting with FcεRII on B cells, IgE immune complexes present antigen to T cells and enhance antibody responses of all classes (Westman et al., 1997). This enhancement is antigen-specific because only FcεRII-expressing B cells that possess the specific BCR receive cognate T cell help (Hjelm et al., 2006). Little is known of the roles played by FcRLs in physiology. The reasons are the limited knowledge of FcRL ligands, but also the small number of genetically engineered mice with altered fcrl genes available. Only transgenic mice with a targeted disruption of the fcrla and fcrlb genes, which encode the intracytoplasmic FcRLs with no known ligand, have been published.
FcRLA-deficient mice displayed an enhanced secondary (but not primary) IgG1 antibody response to a T-dependent particulate antigen, sheep erythrocytes. Responses to T-independent antigens or to soluble T-dependent antigens were unaffected (Wilson et al., 2010). FcRLB-deficient mice displayed an enhanced IgG1 response to nitrophenylated chicken γ-globulins. However, due to unexpected deletions of regulatory sequences, fcrlb−/− mice also had a reduced FcγRIIB expression that could account for the observed hyperresponsiveness (Masuda et al., 2010). FcRs can both protect, as in infectious diseases, and be pathogenic, as in inflammatory diseases. FcRs are involved in protection against infections. Legionella (Joller et al., 2010), Salmonella (Tobar et al., 2004), and Toxoplasma (Joiner et al., 1990) are phagocytosed via FcRs. The neutralization of Bacillus anthracis toxin depends on FcRs (Abboud et al., 2010). FcRγ-deficient mice fail to control Leishmania major (Padigel and Farrell, 2005) or Mycobacterium tuberculosis (Maglione et al., 2008) infection. Conversely, FcγRIIB-deficient mice display an enhanced resistance to these pathogens. FcγRIIIB polymorphisms are associated with clinical malaria (Adu et al., 2012), and FcγRI protected from Plasmodium in mouse models (McIntosh et al., 2007). Instead of being protective, antibodies may favor infection. Although anti-Spike antibodies can prevent the severe acute respiratory syndrome (SARS) coronavirus from entering epithelial cells, they enable FcγR-expressing cells to be infected (Jaume et al., 2011). Likewise, anti-HIV antibodies can use FcRs to infect monocytes (Jouault et al., 1991; Fust, 1997). The role of mast cell and basophil FcεRI is well known in allergy. FcεRI-deficient mice are resistant to IgE-induced passive systemic anaphylaxis (PSA) (Dombrowicz et al., 1993); hIgE induces PSA in hFcεRI-expressing transgenic mice (Dombrowicz et al., 1996; Fung-Leung et al., 1996). IgG1 antibodies can also trigger passive cutaneous anaphylaxis (PCA) when engaging mFcγRIIIA (Hazenbos et al., 1996), and FcγRIV expressed by neutrophils accounted for active systemic anaphylaxis (ASA), together with FcγRIIIA. FcγRIIB-deficient mice display enhanced anaphylaxis (Takai et al., 1996; Ujike et al., 1999). Both hFcγRI and hFcγRIIA triggered IgG-induced PSA and ASA in transgenic mice (Mancardi et al., 2013). Human mast cell FcγRIIA accounts for IgG-induced PCA (Zhao et al., 2006). When coengaged on human basophils, FcγRIIA and FcγRIIB inhibit cell activation. Consequently, basophils failed to be activated by IgG immune complexes, and IgG immune complexes that coengaged FcγRs with FcεRI inhibited IgE-dependent basophil activation (Cassard et al., 2012). FcγRIIB-deficient C57BL/6 mice develop a systemic lupus erythematosus (SLE)-like disease when aging (Ravetch and Bolland, 2001). Anti-platelet antibody-induced thrombocytopenia was prevented in FcRγ-deficient mice (Fossati-Jimack et al., 1999). mFcγRI, IIIA, and IV were found to contribute to platelet depletion (Fossati-Jimack et al., 1999), SLE (Seres et al., 1998), hemolytic anemia (Meyer et al., 1998; Syed et al., 2009), glomerulonephritis (Fujii et al., 2003), and arthritis (Ioan-Facsinay et al., 2002; Bruhns et al., 2003; Mancardi et al., 2011). hFcγRIIA induced thrombocytopenic purpura (Reilly et al., 1994) or arthritis (Pietersz et al., 2009) in transgenic mice.
Antimyelin antibodies found in multiple sclerosis and antibodies against dopaminergic neurons found in Parkinson's disease (McRae-Degueurce et al., 1988) are thought to activate FcR-expressing phagocytic cells. FcRγ-deficient mice indeed displayed fewer or milder lesions in murine models of Alzheimer's disease (Das et al., 2003), Parkinson's disease (He et al., 2002), multiple sclerosis (Robbie-Ryan et al., 2003), and ischemic stroke (Komine-Kobayashi et al., 2004). Conversely, FcγRIIB-deficient mice had an enhanced disease susceptibility. FcRLs have been involved in three types of disease linked to B cell abnormalities: infectious, autoimmune, and proliferative diseases. When binding to integrins on B cells, the HIV envelope protein gp120 upregulates FcRL4 expression, which inhibits B cell proliferation (Jelicic et al., 2013). The expression of FcRL4 is also upregulated in chronic infection by viruses such as HIV and hepatitis C virus (Charles et al., 2008; Moir et al., 2008). SNPs in FcRL1-5 have been associated with several autoimmune disorders, including rheumatoid arthritis, SLE, and Graves' disease. One SNP, the T169C variant, which affects an NF-κB-binding site in the FCRL3 promoter, enhances FcRL3 expression (Kochi et al., 2005), making FCRL3 an autoimmune susceptibility candidate gene (Chistiakov and Chistiakov, 2007). FcRL1-5 are upregulated in most B cell proliferative disorders, including lymphoid leukemias and Burkitt, follicular, diffuse B cell, and mantle cell lymphomas (Li et al., 2014). FcRL4, which is normally expressed by marginal zone B cells, is expressed in marginal zone leukemias. FcRL2 was associated with IGHV-unmutated aggressive chronic lymphoid leukemias. Therapeutic antibodies against cancer use FcRs as tools. The antitumor activities of Rituximab, a chimeric anti-CD20 antibody that has been approved for B cell malignancies, and of Trastuzumab, an anti-HER2 antibody used in breast, ovary, and lung cancer, depend on FcγRs (Clynes et al., 2000; Manches et al., 2003). The therapeutic effects of these mAbs were increased by enhancing their affinity for FcRn, which extends their half-life (Ward and Ober, 2009), and by removing fucose residues from their Fc portion, which increases their affinity for activating hFcγRIIIA (Natsume et al., 2005; Niwa et al., 2005). Therapeutic antibodies against autoimmune or allergic inflammation use FcRs either as tools or as targets. Therapeutic strategies have been developed aiming at coengaging FcεRI or FcεRI-bound IgE with mast cell or basophil FcγRII to prevent allergy (Zhu et al., 2002; Tam et al., 2004). FcγRIIB indeed exerts a dominant inhibitory effect on FcγRIIA and FcεRI in human basophils (Cassard et al., 2012). Anti-FcγRI (Ericson et al., 1996) and anti-FcγRIIIA antibodies (Clarkson et al., 1986) reduced symptoms in idiopathic thrombocytopenia. Anti-IgE antibodies (Omalizumab), used in asthma (Busse et al., 2001), rhinitis (Casale et al., 2001), and chronic urticaria (Kaplan et al., 2008), deplete plasma IgE (Djukanovic et al., 2004) and downregulate FcεRI on basophils and mast cells. Their efficacy was markedly enhanced by increasing their affinity for FcγRIIB (Chu et al., 2012). Initially conceived as a substitutive treatment for immunodeficiencies, intravenous immunoglobulins (IVIG) proved effective in arthritis, idiopathic thrombocytopenia, and SLE (Bayary et al., 2006). IVIG Fc had effects similar to intact IVIG, suggesting a role for FcRs (Anthony and Ravetch, 2010).
The therapeutic effect of IVIG was enhanced by increasing their concentration in sialic acid-rich immunoglobulins (Kaneko et al., 2006). The mechanism underlying this phenomenon remains unclear. FcRLs are potential therapeutic targets in B cell malignancies. Toxin-conjugated anti-FcRL1 mAbs have been used as a pan-B cell depleting reagent, while FcRL5, which is expressed by plasma cells, has been specifically targeted in multiple myeloma (Elkins et al., 2012). FcRLs are also potential therapeutic tools in infectious diseases. Knocking down FcRL4 (as well as other inhibitory receptors) in chronic viral infections indeed restored BCR-dependent B cell proliferation and HIV-specific antibody responses (Kardava et al., 2011). FcRLs are more than Fc receptor-like molecules. FcRs and FcRLs indeed form a single family that shares genetic, structural, and functional properties. Genes encoding hFcRLs all lie in the FCR locus, which contains the vast majority of genes encoding FcRs, on chromosome 1. Likewise, genes encoding mFcRLs all lie in the fcr locus, even though one segment of this locus was translocated to chromosome 3. Importantly, fcrl genes were the ancestors of genes encoding FcγRI, FcγRII, FcγRIII, FcγRIV, and FcεRI, that is, the majority of classical FcRs, which appeared with early mammals during evolution. These receptors account for most properties of IgG and IgE antibodies in humans and mice. FcRs and FcRLs, however, differ by their ligands. Most FcRLs do not bind immunoglobulins whereas, by definition, all FcRs do. All FcRLs, some single-chain FcRs, and the subunits with which multisubunit FcRs associate contain tyrosine-based signaling motifs. This makes the FcR/FcRL family a member of the wider immunoreceptor family, which itself belongs to the IgSF. The immunoreceptor family, defined as gathering receptors that use ITAMs and/or ITIMs for signaling, also contains B cell and T cell receptors for antigens, as well as an increasing number of activating and inhibitory receptors (Daëron et al., 2008). The majority of FcRs are ITAM-containing activating receptors; only one is an ITIM-containing inhibitory receptor. FcRLs contain ITAMs only, ITIMs only, or ITAMs and ITIMs. FcRLs may therefore have more subtle regulatory effects than FcRs. When engaged by immune complexes, however, FcRs form heteroaggregates in which variable numbers of ITIM- and ITAM-containing receptors generate mixtures of positive and negative signals (Daëron, 2014), as FcRLs that contain both ITAMs and ITIMs do when engaged by their ligands. FcRs and FcRLs have markedly different tissue distributions. FcRs are expressed by myeloid cells and by some lymphoid cells, including B cells and NK cells. FcRLs are expressed by lymphoid cells, primarily B cells, but also T and NK cells. Myeloid cells thus express a variety of activating and inhibitory FcRs, but no FcRLs. B lymphocytes express a variety of activating and inhibitory FcRLs, as well as inhibitory FcRs, but no activating FcRs. NK cells and some T cells express activating and inhibitory FcRLs, as well as activating FcRs, but no inhibitory FcRs. FcRs and FcRLs therefore control different functions of different cell types. When engaged by antigen-antibody complexes, FcRs use the many cells of the innate immune system for adaptive immune responses (Daëron, 2014), whereas FcRLs differentially control the responses of cells of the adaptive immune system (but also of NK cells) to adaptive and innate signals (Li et al., 2014).
Finally, FcRs and FcRLs are also relatives of other members of the immunoreceptor family encoded by genes of the LRC locus. These include LILRs A and B, ILTs, KIRs and KIRL, and NCR1, whose genes are all on chromosome 19 with FCAR in humans, and LIRA, PIRA/B, and NCR1, whose genes are on chromosome 7, with KIRL genes on chromosome X, in mice (Akula et al., 2014). The vast majority of these receptors contain ITIMs, some contain ITAMs, and a minority contain both. Being expressed by myeloid cells, B cells, T cells, and NK cells, but also by a variety of non-hematopoietic cells, these receptors are involved in a multitude of immune and non-immune responses (Daëron et al., 2008). It follows that, altogether, the receptors of the immunoreceptor family, among which are FcRs and FcRLs, are major, complementary regulators of innate and adaptive responses.
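The motif logic summarized in this review lends itself to a compact, purely didactic representation. The sketch below is not from the source; the receptor names and motif counts are taken from the text above, and the decision rule is a deliberate simplification of the signaling outcomes it describes:

```python
# Toy model of the ITAM/ITIM rules described in the review:
# ITAM-only receptors deliver activation signals; the ITIM-only
# receptor inhibits, but only when coaggregated with an activating
# receptor; mixed ITAM/ITIM receptors give a mixture whose dominant
# effect is inhibition.
RECEPTOR_MOTIFS = {
    "FcgRIIA": {"ITAM": 1, "ITIM": 0},  # human activating IgG receptor
    "FcgRIIB": {"ITAM": 0, "ITIM": 1},  # the single inhibitory FcR
    "hFcRL1":  {"ITAM": 2, "ITIM": 0},
    "hFcRL4":  {"ITAM": 0, "ITIM": 2},
    "hFcRL5":  {"ITAM": 1, "ITIM": 2},
}

def dominant_signal(receptor: str) -> str:
    m = RECEPTOR_MOTIFS[receptor]
    if m["ITAM"] and m["ITIM"]:
        return "mixed signals, inhibition-dominant"
    if m["ITAM"]:
        return "activation"
    if m["ITIM"]:
        return "inhibition (requires coaggregation with an activating receptor)"
    return "no known signaling motif"

for receptor in RECEPTOR_MOTIFS:
    print(receptor, "->", dominant_signal(receptor))
```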
Coronavirus disease 2019 is a contagious disease with a rapid increase in cases and deaths since its first identification in Wuhan, China, in December 2019. [1] SARS-CoV-2, a new kind of coronavirus previously known as 2019-nCoV, is the causative agent. This virus is genetically related to the Middle East respiratory syndrome CoV and the severe acute respiratory syndrome CoV. [2] Coronavirus disease 2019 (COVID-19) is highly infectious and can lead to fatal complications, especially acute respiratory distress syndrome (ARDS), [3] which is life-threatening. In addition, it can cause nervous system symptoms such as headache. [4] The spike protein of SARS-CoV-2 could dictate tissue tropism by using the angiotensin-converting enzyme type 2 receptor, which can be found in nervous system tissue, to bind to cells. Because there is no specific therapy for this new virus, treating patients symptomatically becomes the only choice for clinicians. [5] Traditional Chinese medicine (TCM), which originated in China thousands of years ago, has been playing a significant role in the treatment of COVID-19. [6] Within TCM, acupuncture is a very important intervention and has been applied all over the world. [7] Many studies have shown that acupuncture, including plum-blossom needle therapy, [8] can alleviate pain, [9] including headache. [10] This review aims to systematically evaluate the effectiveness and safety of plum-blossom needle therapy for COVID-19-related headache by including multiple clinical trials published over the past 10 years. WG and JFL contributed equally to this work and should be considered co-first authors. This systematic review protocol was registered with PROSPERO 2019 (registration number: CRD42020199508). The protocol is reported on the basis of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) declaration guidelines. [11] The review will be performed in line with the PRISMA declaration guidelines. [12] 2.2. Inclusion criteria for study selection 2.2.1. Type of study. All randomised controlled trials (RCTs) of plum-blossom needle therapy for COVID-19-related headache reported in English or Chinese will be included. Trials with a 2-arm or 3-arm parallel design will also be included. Non-RCTs, quasi-RCTs, case series, reviews, animal studies and any study with a sample size of less than 10 participants will be excluded. Patients with headache induced by COVID-19, regardless of sex, age, race or educational and economic status, will be included in the review. Experimental interventions include plum-blossom needle therapy. Control interventions will be western medicine therapy. The primary outcome is the time and rate of disappearance of headache. The secondary outcome is the length of hospital stay. 2.3. Search methods for identification of studies 2.3.1. Electronic data sources. We will search the following sources for the identification of trials: The Cochrane Library, PubMed, EMBASE, Chinese Biomedical Literature Database (CBM), Chinese National Knowledge Infrastructure Database (CNKI), Chinese Science and Technique Journals Database (VIP), and the Wanfang Database. The searches will be limited to articles published in 2020, but no language restrictions will be imposed. 2.3.2. Searching other resources. The reference lists of potentially eligible studies will be scanned, and the relevant conference proceedings will be scanned as well.
The search strategy for PubMed is shown in Table 1. The following search keywords will be used: plum-blossom needle; headache (eg, "head pain" or "head pains" or "pain, head" or "pains, head" or "cephalodynia" or "cephalodynias" or "cranial pain" or "cranial pains" or "pain, cranial" or "pains, cranial" or "cephalalgia" or "cephalalgias" or "generalized headache" or "generalized headaches" or "ocular headache" or "headache, ocular" or "headaches, ocular" or "ocular headaches" or "orthostatic headache" or "headache, orthostatic" or "headaches, orthostatic" or "orthostatic headaches" or "vertex headache" or "headache, vertex" or "headaches, vertex" or "vertex headaches" or "retro-ocular headache" or "headache, retro-ocular" or "headaches, retro-ocular" or "retro ocular headache" or "retro-ocular headaches" or "sharp headache" or "headache, sharp" or "headaches, sharp" or "sharp headaches" or "throbbing headache" or "headache, throbbing" or "headaches, throbbing" or "throbbing headaches" or "unilateral headache" or "headache, unilateral" or "headaches, unilateral" or "unilateral headaches" or "hemicrania" or "bilateral headache" or "bilateral headaches" or "headache, bilateral" or "headaches, bilateral" or "periorbital headache" or "headache, periorbital" or "headaches, periorbital" or "periorbital headaches"); COVID-19 (eg, "2019-nCoV" or "Wuhan coronavirus" or "severe acute respiratory syndrome CoV-2" or "2019 novel coronavirus" or "COVID-19 virus" or "coronavirus disease 2019 virus" or "COVID19 virus" or "Wuhan seafood market pneumonia virus"); and randomized controlled trial (eg, "randomized controlled trial" or "controlled clinical trial" or "random allocation" or "randomized" or "randomly" or "double-blind method" or "single-blind method" or "clinical trial"). The equivalent search keywords will be used in the Chinese databases. The following data will be extracted from the selected studies by 2 independent reviewers using a standard data extraction sheet: year of publication, country, general information, participant characteristics, inclusion and exclusion criteria, sample size, randomization, blinding methods, methods, control, outcome measures, results, adverse reactions, conflicts of interest, ethical approval, and other information. 2.5.3. Assessment of risk of bias and reporting of study quality. Two independent reviewers will assess the quality of the included literature and complete the Standards for Reporting Interventions in Clinical Trials of Acupuncture checklist together with the Cochrane collaboration risk-of-bias assessment method. [13] 2.5.4. Measures of treatment effect. Dichotomous data will be presented as risk ratios with 95% confidence intervals, while continuous outcomes will be shown as standardized mean differences with 95% confidence intervals. The individual participant will be the unit of analysis. 2.5.6. Management of missing data. We will first attempt to determine the cause of the missing data, and one of us will contact the authors if the cause is not found. This will be documented, and the available data will be extracted and analyzed if the missing data cannot be obtained. 2.5.7. Assessment of heterogeneity. The I² statistic will be used to quantify inconsistency, and the standard χ² test will be used to detect statistical heterogeneity. Studies will be considered homogeneous if the P value exceeds .1 and the I² value is less than 50%, in which case the fixed-effects model will be used,
while studies will be considered to have significant statistical heterogeneity if the P value is less than .1 or the I² value exceeds 50%, in which case subgroup analysis will be used to explore the possible causes, and the random-effects model will be applied if the heterogeneity is still important. Review Manager Software V5.3 will be used for data synthesis. The random-effects model will be used if the I² value is no less than 50%. The fixed-effects model will be used if the heterogeneity tests show little statistical heterogeneity. If there is meaningful heterogeneity that cannot be explained by any assessment, meta-analysis will not be performed. Subgroup analysis will be performed to explain heterogeneity if possible. Factors such as different types of control interventions and different outcomes will be considered. Sensitivity analysis will be conducted to test the robustness of the review conclusions if possible. The impacts of sample size, study design, methodological quality, and missing data will be evaluated. This paper will use the GRADE evidence quality rating method to evaluate the results obtained from this analysis. GRADE is generally applied to a large body of evidence. It has 4 evaluation levels, namely high, moderate, low, and very low. GRADE will be used to evaluate the risk of bias, inconsistency, indirectness, and imprecision of the results. In the context of a systematic review, quality reflects our confidence in the estimates of effect. [14] This protocol will not evaluate individual patient information or affect patient rights and therefore does not require ethical approval. [Table 1: Search strategy for the PubMed database.] Results from this review will be disseminated through peer-reviewed journals and conference reports. This systematic review will be the first to assess the effectiveness and safety of plum-blossom needle therapy for COVID-19-related headache, and its results will address a gap in the literature. The review contains 4 sections: identification, study inclusion, data extraction, and data synthesis. This review will aid doctors in the decision-making process for treating patients with COVID-19-related headache, and will provide information for patients and health policy makers. WG and JFL contributed mainly to this manuscript and are joint first authors. GXH obtained funding. WG drafted the protocol. JFL designed the search strategy. GXH will obtain copies of the studies and WG will screen the studies for inclusion. Data extraction will be done by WG and JFL. JFL will enter the data into Review Manager Software. Analyses will be conducted by WG. WG will draft the final review. GXH will act as an arbiter at the study selection stage. All authors have read and approved the final manuscript.
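The protocol's heterogeneity rule is effectively a small decision procedure. As a hedged illustration only (the review itself will use Review Manager V5.3, not custom code), the thresholds stated above can be written as:

```python
# Sketch of the heterogeneity decision rule stated in the protocol:
# fixed-effects model when the chi-squared P value exceeds .1 and
# I^2 < 50%; otherwise subgroup analysis to explore causes and, if
# heterogeneity persists, a random-effects model.
def choose_model(p_heterogeneity: float, i_squared_pct: float) -> str:
    if p_heterogeneity > 0.1 and i_squared_pct < 50.0:
        return "fixed-effects"
    return "random-effects (after exploring causes by subgroup analysis)"

print(choose_model(0.35, 20.0))  # homogeneous studies  -> fixed-effects
print(choose_model(0.02, 65.0))  # heterogeneous studies -> random-effects
```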
There is an urgent need for novel interventions for the prevention and treatment of Mycobacterium tuberculosis (Mtb). The current Mtb epidemic carries a huge cost globally. The World Health Organization's (WHO) 2019 report highlighted the fact that Mtb is the world's most lethal pathogen, responsible for 1.5 million deaths in 2018 (World Health Organization, 2019). It also represents a huge financial burden to those who fall ill in low- and middle-income countries, equating to a 50% loss of household income and thus impeding progress in these emerging nations (Tanimura et al., 2014). Of great concern is the increasing incidence of drug-resistant Mtb, with over half a million new cases reported in 2018. On these most recent statistics, the global community does not look set to reach the WHO End TB Strategy's 2020 milestone of a 20% decrease in Mtb incidence, having achieved only a 6.3% reduction between 2015 and 2018. The shortfall in reaching this target is very likely to be worsened by the disruption to healthcare access caused by the COVID-19 outbreak (Manyazewal et al., 2020). These figures make the case for a new approach to tackling tuberculosis (TB). Vaccination has thus far failed to provide sufficient protection (Dockrell and Smith, 2017), and current treatments fall short, with long treatment regimens, significant pulmonary damage despite cure, ineffective preventative treatments for latent TB and the ever-increasing prominence of drug resistance (Sulis et al., 2016). This highlights the need for new host-directed therapies (HDTs) that can improve the immune response to Mtb infection, resulting in the efficient clearance of infection while minimizing damage to host tissues. However, many proposed immunotherapeutics such as monoclonal antibodies and recombinant cytokines are both labile and expensive, and thus may not be well suited to many of the regions in which TB is endemic, such as Sub-Saharan Africa and South-East Asia. HDTs which target cellular metabolism, particularly existing small molecule therapeutics which can be repurposed, may have the added advantage of being more cost-effective. To develop new therapeutic approaches, the interactions between the microbe and the host must be better characterized. When an aerosolized droplet containing viable Mtb bacilli is inhaled, it travels to the lower lung and is phagocytosed by the primary host cell for Mtb, the alveolar macrophage. The macrophage is a pivotal immune cell in Mtb infection, responsible for bacterial killing and the instruction of other immune cells. Macrophages are highly plastic, capable of taking on a range of phenotypes when activated depending on their microenvironment (Biswas et al., 2012). In the context of Mtb infection, a spectrum of macrophage activation states is induced (Skold and Behar, 2008), and this changes over time. It has been shown that macrophages can adopt a pro-inflammatory phenotype capable of Mtb containment and granuloma formation in the initial stages of infection; however, over time Mtb alters this phenotype, generating a macrophage more permissive to Mtb growth (Refai et al., 2018). The mechanisms by which the macrophage is co-opted by Mtb to promote an environment amenable to its growth are not clearly understood. Unraveling the processes that mediate this transition from a predominantly pro-inflammatory macrophage population to a permissive one may provide new targets that can be therapeutically manipulated to promote bacterial clearance.
Metabolic reprogramming has become recognized as a key cellular process that controls the responses of immune cells (Pearce and Pearce, 2013). Immune cells in different activation states preferentially induce different forms of metabolism to suit their energy requirements: classically activated macrophages perform pro-inflammatory functions during infection, whereas alternatively activated macrophages carry out sustained regulatory functions. Perhaps less appreciated, and what will be explored in this review, are the different metabolic intermediates generated, which can act as signaling molecules and anti-microbial effectors during Mtb infection. These metabolic intermediates should be appreciated as important products in their own right, not merely by-products of the energetic demands of the cell. It should also be noted that macrophage activation status is distinct from macrophage developmental origin, and that these factors overlay to define macrophage phenotype. The term macrophage encompasses a range of cells that can have different origins, phenotypes and functions. In recent years, it has become accepted that macrophages fall into two developmentally distinct populations: recruited and tissue-resident macrophages (Ginhoux et al., 2010; Schulz et al., 2012). Recruited macrophages are of monocytic origin. Monocytes are derived from hematopoietic stem cell progenitors in the bone marrow and circulate in the peripheral blood until they migrate into tissues in response to growth factors, pro-inflammatory cytokines and microbial products (Epelman et al., 2014; Nourshargh and Alon, 2014). Tissue-resident macrophages, however, take up residency in specific tissues during embryonic development and proliferate locally, maintaining the population throughout the animal's lifespan (Guilliams et al., 2013; Yona et al., 2013; Epelman et al., 2014). Resident macrophages carry out homeostatic functions such as the clearance of cellular debris and the processing of iron, as well as performing local immune surveillance (Davies et al., 2013). Tissue-resident macrophages from different tissues are transcriptionally, and thus likely functionally and metabolically, divergent (Gautier et al., 2012). The lung compartment houses two ontologically distinct populations of macrophages: tissue-resident alveolar macrophages (AM) and recruited interstitial macrophages (IM). Data from rodent models have suggested that AM are derived from fetal monocytes during lung development and proliferate locally to maintain the population in the lung (Guilliams et al., 2013), while IM are derived from circulating blood monocytes that are recruited to the interstitial space during infection. However, recent work by Byrne et al. has demonstrated that AM in the adult human lung are mostly peripheral in origin (Byrne et al., 2020). The authors used single-cell RNA-sequencing of cells from the bronchoalveolar lavage (BAL) fluid of sex-mismatched lung transplant recipients to show that the majority of AM were recipient-derived, inferring that the AM population is replenished from circulating precursors in the periphery. Under homeostatic conditions, the AM population monitors the lung independently of monocyte-derived macrophages. AM are long-lived (Maus et al., 2006) tissue-resident macrophages with a high phagocytic capability, are believed to be the primary initial immune cell to interact with Mtb (Cohen et al., 2018), and are therefore key in determining the subsequent immune response to Mtb infection.
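The sex-mismatch inference used by Byrne et al. (2020), described above, can be illustrated computationally: in a sex-mismatched transplant, expression of the female-specific transcript XIST versus a Y-chromosome transcript such as RPS4Y1 allows each cell in the BAL sample to be assigned a donor or recipient origin. The sketch below is a hypothetical, minimal illustration of that logic and is not the authors' actual pipeline; the marker genes are standard sex-specific transcripts, but the counts and threshold are invented.

```python
# Toy donor-vs-recipient assignment for a sex-mismatched lung transplant,
# assuming here a male recipient and a female donor lung. Cells expressing
# the female-specific transcript XIST are called donor-derived; cells
# expressing the Y-chromosome transcript RPS4Y1 are called recipient-derived.
# All counts below are invented for illustration.

cells = {
    "cell_001": {"XIST": 12, "RPS4Y1": 0},
    "cell_002": {"XIST": 0,  "RPS4Y1": 7},
    "cell_003": {"XIST": 2,  "RPS4Y1": 1},
}

def assign_origin(counts, min_counts=3):
    """Call a cell's origin from sex-specific transcript counts."""
    x, y = counts["XIST"], counts["RPS4Y1"]
    if x >= min_counts and y < min_counts:
        return "donor-derived (female)"
    if y >= min_counts and x < min_counts:
        return "recipient-derived (male)"
    return "ambiguous"

for name, counts in cells.items():
    print(name, "->", assign_origin(counts))
```

In the study's framing, a predominance of recipient-derived calls among AM is what supports replenishment of the population from circulating precursors.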
Alveolar macrophages are unique in comparison to other tissue-resident macrophage populations in that they are in direct contact with the external environment, constantly being exposed to inhaled particulates, commensal bacteria and host epithelial factors such as surfactant. The homeostatic activation state of AM has been controversial. A small population of IL-13-producing macrophages has been characterized in the lung compartment, and this population increases in response to cigarette smoke, hinting that perhaps the normal population is more classically activated (Shaykhiev et al., 2009). Recent evidence indicates that AM are relatively plastic under homeostatic conditions. A study that used immunohistochemistry to determine the activation states of AM in the lungs of 6 normal donors found that healthy lung tissue AM expressed neither classical nor alternative activation markers (Bazzan et al., 2017). Interestingly, smoking and chronic obstructive pulmonary disease (COPD) increased the expression of both pro- and anti-inflammatory macrophage markers and the co-expression of these markers, highlighting that activation states do not have to be exclusive. The basal metabolic state of AM is believed to be distinct from that of peripherally derived macrophages. Gleeson et al. demonstrated using extracellular flux analyses that human AM are more reliant on oxidative phosphorylation than glycolysis and are metabolically similar to an alternatively activated monocyte-derived macrophage (MDM) (Gleeson et al., 2018). Their results also showed that while the basal metabolic state of the AM was more quiescent, significantly higher metabolic reserves were measured in the AM, indicating that perhaps AM are metabolically programmed so as to maintain an anti-inflammatory environment in the lung while having the capacity to mount a swift response to infection if required. Huang et al. compared the metabolic profiles of IM and AM in a murine model of Mtb infection and found that IM adopt a glycolytic profile in response to infection while AM upregulate pathways involved in fatty acid oxidation (FAO). They also showed that IM cultured ex vivo secreted more lactate than AM, indicating that even basally IM are more glycolytically active. AM reside in a lipid-rich environment, surrounded by pulmonary surfactant, a phospholipid monolayer that lines the alveolar surfaces. AM have been shown to upregulate the scavenger receptor CD36 in response to Mtb infection, and this both increases the uptake of surfactant lipids and generates a phenotype more permissive to Mtb growth (Dodd et al., 2016). The lipid-centric environment and metabolism of AM may be exploited by Mtb to both fuel its growth and evade immune responses. The notion that metabolic profiles underpin immune function derives from the clear relationship between macrophage activation state and metabolism. Under homeostatic conditions macrophages utilize mitochondrial oxidative phosphorylation to metabolize glucose and generate ATP. When macrophages become classically activated, a metabolic shift occurs from oxidative phosphorylation toward glycolysis despite the availability of oxygen (Hard, 1970). Glycolysis converts glucose into pyruvate in the cytoplasm to produce ATP. During this glycolytic reprogramming, pyruvate is preferentially converted to lactate, and though glycolysis is less energy efficient, it can be quickly upregulated to allow rapid generation of cytoplasmic ATP.
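The efficiency/speed trade-off just described can be put in rough numbers. Using textbook ATP yields per molecule of glucose (around 2 for glycolysis alone versus roughly 30-32 for complete oxidation), the short sketch below shows how a much higher glycolytic flux can compensate for the lower yield; the flux values are invented solely for illustration and are not measurements from any study cited here.

```python
# Back-of-the-envelope comparison of ATP output from glycolysis vs.
# oxidative phosphorylation. Yields per glucose are textbook estimates;
# the glucose throughput numbers are hypothetical, chosen only to show
# how a fast, low-yield pathway can rival a slow, high-yield one.

ATP_PER_GLUCOSE_GLYCOLYSIS = 2    # net ATP from glycolysis alone
ATP_PER_GLUCOSE_OXPHOS = 32       # approximate yield from full oxidation

flux_glycolysis = 20.0            # assumed glucose consumed per minute (a.u.)
flux_oxphos = 1.5                 # assumed glucose consumed per minute (a.u.)

print("glycolysis ATP/min:", flux_glycolysis * ATP_PER_GLUCOSE_GLYCOLYSIS)  # 40.0
print("oxphos ATP/min:    ", flux_oxphos * ATP_PER_GLUCOSE_OXPHOS)          # 48.0
```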
This is thought to meet the increased energy demands of activation. However, we know this metabolic reprogramming serves a further purpose than simply meeting the ATP demand, as cancer cells and T cells adopt Warburg metabolism to generate biosynthetic precursors to support cell division. Yet macrophages, which are non-dividing, also adopt this metabolic profile. This glycolytic switch may be necessary due to the increased transcription and translation requirements of these cells, which require the nucleotide building block ribose, generated in the pentose phosphate pathway (PPP), a side branch of glycolysis; in any case, blocking glycolysis has been shown to abolish the ability of the macrophage to contain Mtb growth (Gleeson et al., 2016). Moreover, Mtb has been shown to block macrophage metabolism to evade eradication by the immune system (Cumming et al., 2018; Hackett et al., 2020). We will now discuss these key metabolic pathways, the metabolites they generate and how they relate to Mtb infection, as summarized in Figure 1. Glucose is transported into the cell through a glucose transporter; the primary rate-limiting glucose transporter in pro-inflammatory macrophages is GLUT1 (Freemerman et al., 2014), encoded by the SLC2A1 gene. GLUT1 is upregulated in response to Mtb infection to supply the cell with the glucose required to sustain the induction of glycolysis (Braverman et al., 2016). The first irreversible step of glycolysis is catalyzed by hexokinase (HK) and is a rate-limiting step, converting glucose to glucose-6-phosphate. Though there are several isoforms of this enzyme, HK2 is the principal regulated form in most cell types (Wilson, 2003); it is inhibited by 2-deoxyglucose (2-DG) and upregulated in response to Mtb infection (Braverman et al., 2016). Furthermore, it has been shown to be downregulated in diabetic patients, who are at increased risk of Mtb infection (Qu et al., 2012). At this point glucose-6-phosphate can continue down the pathway of glycolysis, be converted into glycogen, or be oxidized by the oxidative branch of the PPP (Gottlieb, 2011). The PPP is a parallel metabolic pathway that occurs alongside glycolysis, converting glucose-6-phosphate into NADPH, which can be used in the generation of reactive oxygen and nitrogen species, and ribose-5-phosphate for the production of nucleotides. Activated macrophages have been shown to have increased PPP activity (Jha et al., 2015), and TLR stimulation suppresses carbohydrate kinase-like protein (CARKL), an inhibitor of the PPP (Haschemi et al., 2012), although its impact on Mtb infection is unclear and has yet to be formally addressed. Cumming et al. have performed extracellular flux analysis and carbon-tracing experiments on human MDM infected with live virulent Mtb as well as Bacillus Calmette-Guérin (BCG) and dead Mtb, which indicated that live Mtb is able to suppress macrophage energy flux while less virulent and dead forms of Mtb drive glycolysis (Cumming et al., 2018). They showed that while dead Mtb enhanced the flux through the PPP, this was negatively regulated by live Mtb, indicating that Mtb may have evolved mechanisms to restrict this pathway as an immune evasion mechanism. FIGURE 1 | Key metabolic pathways and metabolic intermediates in Mtb immune responses. Mycobacterium tuberculosis bacilli are denoted as Mtb in green; processes upregulated and downregulated by infection are indicated by green arrows and red blunt-ended lines, respectively.
(A) Glycolysis converts glucose to lactate, which may act both as a fuel source for Mtb and as a direct antimicrobial effector. GLUT1, the glucose transporter, is upregulated in response to Mtb infection, and hexokinase 2 (HK2) is also upregulated to allow an enhanced glycolytic rate. Mtb may limit the induction of glycolysis by negative regulation of the rate-limiting enzyme phosphofructokinase 1 (PFK-1). Lactate dehydrogenase (LDH) is upregulated in infected macrophages, allowing enhanced conversion of pyruvate to lactate, which may act as an alternative fuel source for Mtb or as a directly toxic antimicrobial mediator; there is evidence that Mtb negatively regulates this process. Pyruvate kinase M2 (PKM2) works in tandem with hypoxia-inducible factor 1 alpha (HIF-1α) to allow transcription of IL-1β. (B) The pentose phosphate pathway produces NADPH, which is used to generate reactive oxygen species (ROS) and nitric oxide (NO); these are directly antimicrobial, and NO has additional roles in the potentiation of glycolytic metabolism. Live Mtb may negatively regulate flux through this pathway to limit these actions. (C) A tricarboxylic acid (TCA) cycle break point leads to an accumulation of itaconate, which can inhibit the Mtb enzyme isocitrate lyase (ICL), while also protecting the host from excessive inflammation by limiting the oxidation of succinate by succinate dehydrogenase (SDH) and inflammatory gene expression. A second TCA cycle break point leads to a build-up of succinate, which can (i) lead to the generation of mitochondrial ROS (mtROS), which are directly antimicrobial and also support the production of IL-1β, (ii) stabilize HIF-1α to promote glycolysis by inhibiting prolyl hydroxylases (PHDs), and (iii) play a role in the induction of innate immune training. (D) Lipids are key cellular fuel sources exploited by Mtb to promote its growth and inhibit its destruction within the cell. Amino acids are also important in Mtb responses. Arginine can be metabolized into antimicrobial NO by inducible nitric oxide synthase (iNOS), or into ornithine by arginase 1 (Arg1), which is anti-inflammatory. Likewise, tryptophan can be broken down into kynurenine by indoleamine 2,3-dioxygenase (IDO), which is also anti-inflammatory. Glutamine has been shown to play roles in the production of IL-1β, the generation of NO and the induction of innate immune training. Pro-inflammatory macrophages have been shown to induce glycolysis to meet increased energy demands and provide biosynthetic precursors to promote antimicrobial responses during inflammation (Tannahill et al., 2013). The induction of glycolysis is orchestrated by the transcription factor hypoxia-inducible factor 1 (HIF-1), which acts as a master regulator of pro-inflammatory immune functions (Palazon et al., 2014). HIF-1 has a large number of target genes, including the transporters and enzymes which constitute the glycolytic machinery (e.g., GLUT1, HK2, LDHA), but also many pro-inflammatory cytokines, chief among which is IL-1β. The molecular mechanisms by which this metabolic switch to glycolysis in response to infection is mediated are beginning to be unraveled. Pyruvate kinase M2 (PKM2) is a splice variant of the glycolytic enzyme pyruvate kinase which was originally identified as being upregulated in cancer to enable Warburg metabolism (Christofk et al., 2008). Palsson-McDermott et al. identified PKM2 as a key metabolic regulator which mediates HIF-1α activation in LPS-stimulated macrophages (Palsson-McDermott et al., 2015).
They showed that LPS induced PKM2 expression, and this mediated the binding of PKM2 and HIF-1α to a hypoxia response element (HRE) in the promoter of the gene encoding IL-1β. The authors additionally showed that this PKM2/HIF-1α/IL-1β axis was important for containing Mtb infection in an in vitro murine model. IL-1β has been implicated as one of the most important cytokines for containing Mtb infection. Mayer-Barber et al. demonstrated that IL-1β induces the expression of eicosanoids which limit excessive type I interferon signaling and promote bacterial containment (Mayer-Barber and Sher, 2015). Braverman et al. have shown that HIF-1α is an essential gene for the control of Mtb infection, and additionally that the macrophage-activating cytokine interferon-γ (IFN-γ) acts through HIF-1α to induce glycolytic reprogramming and promote bacterial containment (Braverman et al., 2016). Gleeson et al. went on to show that Mtb infection induces an immunometabolic shift to glycolysis in several macrophage models, including human alveolar macrophages, and that this glycolytic response is essential for sufficient induction of IL-1β and control of Mtb growth (Gleeson et al., 2016). Our work has shown that Mtb negatively regulates this induction of glycolysis over time by limiting the induction of an isoform of the glycolytic rate-limiting enzyme phosphofructokinase 1 (PFK-1) (Hackett et al., 2020). PFK-1 is a tetrameric enzyme that can be composed of different combinations of the muscle (M), platelet (P) and liver (L) isoforms. Each isoform is encoded by a different gene, and the isoforms are differentially expressed in different tissues. RNA-seq analysis of murine bone marrow-derived macrophages (BMDM) showed that PFK-L and PFK-P are upregulated 24 hours post Mtb infection whereas PFK-M is not (Shi et al., 2015), and our work showed that Mtb induced microRNA-21, which directly limited PFK-M expression and consequently dampened glycolytic induction and antimicrobial responses. MicroRNAs have emerged as key molecules involved in regulating a range of cellular processes, including innate immunity (Momen-Heravi and Bala, 2018). In addition to miR-21, other microRNA species have been implicated in the pathogenesis of Mtb infection, as reviewed by Behrouzi et al. (2019). For example, miR-33 has been demonstrated to be induced by Mtb infection and to limit lipid metabolism and autophagy, thus promoting Mtb survival (Ouimet et al., 2016). Another recent study has demonstrated that multi-drug-resistant strains of Mtb modulate macrophage metabolism to limit IL-1β responses (Howard et al., 2018). Together these observations underline how critical adequate macrophage glycolytic activation is to Mtb immunity, and how Mtb has evolved mechanisms to limit metabolism and evade the immune response. Although it is well established that fatty acids are a key source of energy for Mtb, feeding central carbon metabolism (McKinney et al., 2000; Marrero et al., 2010), macrophage metabolic reprogramming may alter the availability of these lipids, and other carbon sources in the Mtb microenvironment can be utilized instead. Lactate dehydrogenase (LDH) catalyzes the interconversion of pyruvate and lactate and the accompanying interconversion of NADH and NAD+. LDHA is upregulated in response to metabolic reprogramming in macrophages and dendritic cells (Kelly and O'Neill, 2015; Braverman et al., 2016).
Accumulation of lactate, the final product of glycolysis, is enhanced in response to increased rates of glycolysis and can be used as a surrogate marker of glycolytic activity (Rogatzki et al., 2015). The view of lactate as a waste product of metabolism has begun to be re-examined. Lactate has been shown to inhibit T cell migration, polarize tumor-associated macrophages toward an M2 phenotype (Colegio et al., 2014), inhibit pro-inflammatory macrophage responses and glycolytic programming, and drive dendritic cells toward a more tolerogenic phenotype (Errea et al., 2016). Lactate has also recently been reported to facilitate Mtb growth by acting as an additional carbon source (Billig et al., 2017). Lactate has been shown to be present in significant quantities at the site of infection and the granuloma (Shin et al., 2011; Somashekar et al., 2011; Shi et al., 2015), both intra- and extracellularly (Serafini et al., 2019). Billig et al. showed that Mtb can use lactate as its sole carbon source in vitro (Billig et al., 2017), though at higher concentrations it was found to be toxic to the bacterium. Interestingly, a mutant Mtb strain that lacked the lactate dehydrogenase gene (Lld2) required to process lactate showed sensitivity to the toxic effects of lactate even at lower concentrations, indicating that perhaps lactate is used as a fuel by Mtb not only because of its availability but also to remove it from the bacterium's environment. Lactate has been shown to inhibit the growth of other bacteria through the generation of reactive oxygen species (ROS) (Abbott et al., 2009); thus, the oxidation of lactate to pyruvate by Mtb may serve both fuel and protective purposes. A study which compared two clinical isolates of Mtb found that in a lipid-poor environment, one of the strains upregulated Lld2, further indicating a role for lactate as a substitute fuel source (Baena et al., 2019). Additionally, genome analysis of lineage 4 Mtb genomes (the most common and most globally distributed Mtb lineage) identified several mutations in the promoter and protein-coding regions of Lld2 which had independently arisen over a hundred times (Brynildsrud et al., 2018). The codon mutations were then further identified in other Mtb lineages and associated with a significant positive effect on transmissibility. More evidence for the alternative role of lactate, as a molecule upregulated by the macrophage to combat Mtb infection, comes from the finding that infected macrophages upregulate LDH-A through HIF-1α (Osada-Oka et al., 2019), allowing increased conversion of pyruvate to lactate. HIF-1α-deficient macrophages were found to have significantly higher levels of intracellular pyruvate, and LDH-A-deficient macrophages were not as proficient at containing Mtb growth as wild-type cells. This study indicated that pyruvate is the preferred intracellular carbon source over glucose; thus, the upregulation of LDH in response to Mtb infection may be a defense mechanism to both deprive the bacterium of fuel and boost anti-mycobacterial ROS generation. De Carvalho et al. recently demonstrated that pyruvate and lactate are in fact better carbon sources for Mtb than glucose and fatty acids, but only when oxygen is plentiful (Serafini et al., 2019). Recent evidence from Cumming et al.
shows that while lactate and pyruvate are both enriched in the supernatant fluid of human monocyte-derived macrophages infected with BCG and dead Mtb, live virulent Mtb decreased the production of these metabolites, indicating negative regulation of this process to aid in immune evasion (Cumming et al., 2018). Our work found that neither dead nor live virulent Mtb induced LDHA in human alveolar macrophages (Hackett et al., 2020), and together these findings indicate that the glycolysis/LDHA/lactate axis is an important antimicrobial mechanism which Mtb actively suppresses. The Krebs cycle is becoming increasingly recognized as a central regulatory component of the immunometabolic programme. Also known as the tricarboxylic acid (TCA) or citric acid cycle, it is being reconsidered by immunologists as more than a way of generating energy: rather, it is a pivotal system through which metabolites that can regulate the immune response are generated. LPS-activated macrophages have been demonstrated to suppress oxidative phosphorylation through the Krebs cycle and to accumulate the intermediates succinate and itaconate (Tannahill et al., 2013). It is hypothesized that there are two metabolic "breaks" in the cycle which occur in response to LPS stimulation. The first breakpoint occurs at the third step of the cycle, where isocitrate dehydrogenase (IDH) converts isocitrate to alpha-ketoglutarate. Activated macrophages downregulate Idh mRNA expression to limit this conversion, with an associated accumulation of itaconate (Tannahill et al., 2013). The second breakpoint is thought to occur at succinate dehydrogenase (SDH), which is also complex II of the electron transport chain and oxidizes succinate to fumarate; limiting this step leads to an accumulation of succinate (Jha et al., 2015). Although whether these TCA cycle breakpoints occur in the case of Mtb infection remains to be verified, the impact of these metabolites on Mtb responses is important. In recent years, succinate has begun to be appreciated as a metabolite that accumulates in pro-inflammatory macrophages and functions as an immune signal (Tannahill et al., 2013). Succinate can play several roles in amplifying the immune response. Tannahill et al. observed that succinate accumulates following macrophage activation and signals through HIF-1α to induce IL-1β production (Tannahill et al., 2013). While NFκB is thought to be responsible for the majority of the early induction of IL-1β, HIF-1α can also induce IL-1 transcription, with both the human and murine IL-1β genes containing HIF binding sites (Fang et al., 2009; Tannahill et al., 2013), and this is thought to be responsible for the sustained induction of IL-1 later in the course of inflammation. In the inflammatory macrophage, HIF-1α can be stabilized even in normoxia by the elevated levels of succinate, which inhibit PHDs (Tannahill et al., 2013), allowing IL-1β transcription to occur. This succinate-mediated stabilization of HIF-1α works in tandem with the PKM2-mediated trans-activation already discussed (Palsson-McDermott et al., 2015). Oxidation of succinate has additionally been shown to play a key role in the inflammatory process, allowing the Krebs cycle to serve as a ROS generation system (Mills et al., 2016). Both succinate oxidation by SDH and increased mitochondrial membrane potential are required for ROS generation, which is in turn required for the HIF-1α-dependent potentiation of IL-1β signaling (Tannahill et al., 2013).
In an in vivo model in which SDH was inhibited in mice injected with LPS, SDH inhibition reduced inflammation, decreasing the production of the pro-inflammatory cytokines TNF-α and IL-1β and enhancing the production of anti-inflammatory IL-10 (Mills et al., 2016). Garaude et al. have additionally shown that live E. coli bacteria (but not dead bacteria) are able to alter the assembly of electron transport chain supercomplexes, which contributed to anti-bacterial responses (Garaude et al., 2016). These studies highlight the importance of mitochondrial ROS as an antimicrobial signal. In the case of Mtb infection, while the glycolytic machinery is upregulated, there is a concomitant decrease in the expression of Krebs cycle genes, including the SDH subunits SDHA, SDHC and SDHD, which would likely contribute to succinate accumulation (Shi et al., 2019). Murine lungs infected with Mtb have been shown to have an accumulation of succinate (Shin et al., 2011), indicating that this metabolic break is of functional importance in in vivo Mtb infection; however, the Mtb model has not been specifically examined in this context. Cumming et al. found increased flux in succinate production in response to BCG infection of human MDM; however, this was not observed for Mtb (Cumming et al., 2018). Given that SDH upregulation has been shown to be important for ROS generation and HIF-1α stabilization, both of which have been demonstrated to play a role in Mtb responses, it would be interesting to examine the dynamic contributions of succinate and SDH in Mtb infection. Succinate has also been shown to bind a G-protein-coupled receptor now known as SUCNR1 (He et al., 2004). Following activation of the macrophage with LPS, succinate accumulates and SUCNR1 expression increases, and succinate signaling leads to an enhanced IL-1β response (Littlewood-Evans et al., 2016). IL-1β in turn enhances SUCNR1 expression, creating a positive feedback loop. More recently, Keiran et al. have demonstrated a role for succinate/SUCNR1 signaling in promoting an anti-inflammatory phenotype in adipose tissue-resident macrophages, which protected the host from tissue inflammation both in homeostasis and in metabolic stress (Keiran et al., 2019). These conflicting results suggest that the role of succinate in SUCNR1 signaling may be tissue- or context-specific. The role of SUCNR1 in Mtb infection has yet to be established; however, given that succinate accumulates in Mtb infection and SUCNR1 is implicated in determining macrophage phenotype in the context of inflammation, its role in Mtb infection may be important. SDH activity is in turn regulated by another metabolite, itaconate. Immune-responsive gene 1 (Irg1) produces itaconate from the Krebs cycle intermediate aconitate (Michelucci et al., 2013), and this itaconate was shown to inhibit the growth of microorganisms including Mtb in liquid culture. Irg1 expression, and thus the level of itaconate, is increased following macrophage stimulation with LPS (Strelko et al., 2011). Itaconate was shown to inhibit SDH-mediated oxidation of succinate and thus limit pro-inflammatory responses (Lampropoulou et al., 2016). Itaconate has additionally been shown to limit inflammation by activation of the transcription factor Nrf2 (Mills et al., 2018), which limits inflammatory gene expression and downregulates type 1 interferons.
Itaconate has been measured in the Mtb-infected murine lung (Shi et al., 2019), while Irg1 has been shown to be highly upregulated in the murine macrophage and lung after Mtb challenge. As well as limiting the oxidation of succinate and activating Nrf2, itaconate inhibits the activity of the microbial enzyme isocitrate lyase (ICL1) (Williams et al., 1971). ICL1 is part of the glyoxylate shunt, which is thought to be an adaptation to low-glucose environments such as the phagolysosome (Luan and Medzhitov, 2016), allowing the bacterium to be fueled by 2-carbon compounds (Lorenz and Fink, 2002). ICL1 has been shown to be upregulated in phagocytosed Mtb bacilli (Graham and Clark-Curtiss, 1999) and to be required for long-term persistence of Mtb in a murine infection model (McKinney et al., 2000; Munoz-Elias and McKinney, 2005). Wang et al. identified an enzyme in the virulent Mtb strain H37Rv which is capable of degrading itaconate (Wang et al., 2019), deletion of which reduced the bacterial burden in a murine model of infection. Irg1 has been shown to be essential for host survival in murine Mtb infection, with Irg1 knockout mice exhibiting excessive inflammation and a higher bacterial burden when infected with Mtb (Nair et al., 2018). Nair et al. also carried out this experiment with a strain of Mtb in which icl1 had been deleted; Irg1 knockout mice were still unable to control Mtb infection and were killed by the infection, so Irg1 and itaconate likely control Mtb growth in vivo independently of the inhibitory effect on ICL1 that has been noted in vitro. It is likely that the combined effects of succinate-driven pro-inflammatory effector mechanisms and itaconate-driven resolving mechanisms act to clear Mtb infection while minimizing damage to the host tissue. In line with this postulation are the recent findings of a metabolomic study of mouse lung following Mtb infection, which noted an increase in succinate 4 weeks post-infection that was dramatically reduced by week 9 of infection, while itaconate steadily accumulated throughout this time course (Fernandez-Garcia et al., 2020). It should be noted that our understanding of these events may be about to shift. Recent findings by Palmieri et al. have identified nitric oxide (NO) as a mediator of Krebs cycle changes during the inflammatory response (Palmieri et al., 2020) and indicated that this model of a "break" in the Krebs cycle leading to succinate, itaconate and citrate accumulation, and thus metabolic reprogramming of the cell, may need revision. The authors found that NO directly mediated the metabolic reprogramming through aconitase 2 and PDH, rather than reprogramming being a downstream result of rewiring. Braverman et al. have previously found that NO is required for HIF-1α stabilization during Mtb infection and negatively regulates NFκB signaling to limit inflammation (Braverman and Stanley, 2017). In the case of the repurposing of the TCA cycle to generate ROS and immunometabolites, it is yet to be determined whether impaired glycolysis, and thus a reduced availability of pyruvate to feed flux through the TCA cycle, is the initial event in reprogramming, or rather whether a break in the TCA cycle and the accumulation of intermediates causes an upregulation of glycolysis. In the context of Mtb infection this is even less clear-cut, with some reports finding oxidative phosphorylation intact (Hackett et al., 2020), perhaps fueled by alternative pathways such as the oxidation of intracellular lipids via FAO (Knight et al., 2018).
When examining the relationship between the immune response to Mtb infection and metabolism, the fuel preferences of both the macrophage and the bacterium, and the consequent impacts of the by-products of the utilization of these fuels by both species, must be taken into account. In the context of Mtb infection, murine IM and AM have been shown to be metabolically and functionally distinct. Huang et al. have shown using fluorescent Mtb reporter strains that infected IM adopt a pro-inflammatory, glycolytic phenotype, produce IL-1β and NO, and are better at clearing the invading pathogen, while AM are more reliant on FAO, produce type 1 interferons and provide a more permissive environment for Mtb growth. Mtb requires a carbon fuel source in order to perform its metabolism, and lipids and fatty acids have long been recognized as its preferred energy source. The dominant FAO metabolism of the AM induced by Mtb infection creates a nourishing, permissive environment for the bacterium to replicate. Mtb uses host lipids (fatty acids and cholesterol) for its optimal colonization of the host. Cholesterol import by Mtb as a source of carbon has been shown to be essential for long-term infection of a murine host and for replication inside activated macrophages (Pandey and Sassetti, 2008). Aberrant cholesterol status has been linked to poorer Mtb responses: hypercholesterolemia has been correlated with Mtb risk in a human study (Soh et al., 2016), while hypercholesterolemic apoE-/- mice were shown to have a higher bacterial burden and more severe lung damage (Martens et al., 2008). Cholesterol accumulation has been linked to inhibition of phagosomal maturation (Huynh et al., 2008), a clear advantage to Mtb, as well as to impaired autophagy and thus less bacterial killing (Chandra and Kumar, 2016). Oxidized low-density lipoprotein (oxLDL) has also been shown to accumulate in granulomas and has been associated with increased Mtb growth in an in vivo guinea pig model (Palanisamy et al., 2012). oxLDL is resistant to lipolysis and encourages lipid accumulation within the macrophage and poor efflux of cholesterol (Brown et al., 2000). Vrieling et al. have recently linked oxLDL to impaired macrophage responses to Mtb and proposed this as a contributing element to the increased risk of Mtb infection in type 2 diabetics (Vrieling et al., 2019b). They measured higher plasma oxLDL in diabetic patients and showed that in vitro oxLDL treatment of human macrophages significantly increased the bacterial burden by inducing cholesterol accumulation and a lysosomal dysfunction which impaired lysosome co-localization with Mtb. Fatty acids are also plentiful in the macrophage, particularly as the granuloma forms and matures (Kim et al., 2010), and are utilized by Mtb as a source of lipid building blocks (Lee et al., 2013). In the case of Mtb, foamy macrophage generation can be induced by infection (D'Avila et al., 2006), and these lipid-rich macrophages are present in high numbers in the Mtb granuloma (Peyron et al., 2008). Foamy macrophages are generated when macrophage lipid intake and export are unbalanced and an accumulation of lipoproteins occurs, and this phenotype has been associated with other diseases, particularly atherosclerosis (Moore et al., 2013). While atherosclerotic foam cells are cholesterol-dominated, Mtb granulomas are richest in triglycerides (Guerrini et al., 2018), with mycobacterial ligands signaling through macrophage receptors to alter triglyceride content (Dkhar et al., 2014).
Guerrini et al. have shown in vitro that this lipid droplet formation in response to Mtb infection is driven by signaling through the TNF receptor, which activates caspases and mTORC1 (Guerrini et al., 2018). The exact mechanism has yet to be untangled, but in other models peroxisome proliferator-activated receptor-γ (PPAR-γ) has been shown to promote lipogenesis (Li et al., 2014), and this nuclear receptor has been implicated in regulating the link between macrophage lipid metabolism and foam cell generation in Mtb infection. PPAR-γ is highly expressed in alveolar macrophages (Schneider et al., 2014), the primary host cell for Mtb. Almeida et al. demonstrated that murine macrophages infected with BCG upregulate PPAR-γ in a TLR-2-dependent manner, which enhanced lipid body formation and PGE2 synthesis (Almeida et al., 2009), while Rajaram et al. have described PPAR-γ activation following Mtb phagocytosis in human macrophages, which suppressed pro-inflammatory responses and enhanced Mtb growth (Rajaram et al., 2010). Guirado et al. further demonstrated that PPAR-γ negatively regulates macrophage activity and impairs Mtb responses in an in vivo murine model (Guirado et al., 2018). Interestingly, the essential vitamin B1 has been shown to enhance macrophage Mtb responses in a murine in vivo model by limiting PPAR-γ activation (Hu et al., 2018). The induction of the foamy macrophage phenotype has been viewed as a mechanism by which Mtb can gain fuel and carbon building blocks from its host; however, there is accumulating evidence that the induction of this phenotype may serve the additional purpose of blocking antimicrobial responses. Virulent strains of Mtb have been shown to induce the foamy macrophage phenotype, and this blocks autophagy and lysosomal acidification (Singh et al., 2012). Cumming et al. demonstrated that live Mtb increased macrophage dependency on exogenous fatty acids, which would likely encourage lipid droplet formation, while dead Mtb did not have the same effect (Cumming et al., 2018). Providing an alternative view of lipid droplet formation is the study from Knight et al., who showed that lipid droplet formation in Mtb infection is not driven by the bacterium but rather by the host inflammatory response (Knight et al., 2018). Using a murine model, they showed that lipid droplet formation requires IFN-γ and HIF-1α, and this in turn is required for the production of PGE2 and leukotriene B4, which are protective in Mtb infection (Mayer-Barber and Sher, 2015), as well as being a method by which lipids are sequestered from Mtb. These findings challenge the current paradigm somewhat but still emphasize that fatty acid metabolism is a key process modulated by both the host and Mtb: the macrophage and the bacterium are in a metabolic arms race. Metabolic reprogramming may explain the formation of lipid droplets in response to Mtb via increased flux through other metabolic pathways such as the PPP. Our work has shown that Mtb negatively regulates the activity of PFK-1 (Hackett et al., 2020), a key rate-limiting enzyme in glycolysis, at which point glucose derivatives can either continue through glycolysis or be shuttled into the PPP, which could fuel fatty acid synthesis. This targeting of glycolysis may aid in immune evasion by limiting glycolysis-mediated antimicrobial activities while simultaneously boosting the production of fatty acids and nucleotides.
As well as serving as the biological building blocks of proteins, amino acids and the metabolites derived from them are also able to act as direct anti-mycobacterial agents (Qualls and Murray, 2016). Two amino acids associated with the alternatively activated macrophage, tryptophan and arginine, are also implicated in Mtb infection. Unlike bacteria, animals are unable to synthesize tryptophan, and this essential amino acid, which is required for a broad array of biological functions, must be obtained from the diet. While tryptophan is known to have roles such as being the precursor to serotonin, it is tryptophan catabolism through the kynurenine pathway that is of importance to immune function (Moffett and Namboodiri, 2003). Indoleamine 2,3-dioxygenase 1 (IDO1), the enzyme which catalyzes the catabolism of tryptophan, has been found to be significantly increased in the Mtb granuloma in non-human primate models (Mehra et al., 2013) and has been shown to be an effective biomarker for active tuberculosis infection (Adu-Gyamfi et al., 2017). IDO1 plays an inhibitory role in inflammation by limiting the activity of CD4+ T cells, and its catabolism of tryptophan has been shown to be essential in mediating tolerance (Munn et al., 1998), both through tryptophan depletion and through the accumulation of kynurenines, tryptophan metabolites which have an immunosuppressive effect (Belladonna et al., 2007). Tryptophan metabolites have also been shown to promote TGF-β production by dendritic cells, leading to the generation of regulatory T cells (Yan et al., 2010). For other intracellular bacterial infections such as chlamydia, induction of IDO in the macrophage by IFN-γ provided by CD4+ T cells serves to deprive the pathogen of tryptophan (Byrne et al., 1986; Beatty et al., 1994); however, Mtb can synthesize tryptophan de novo, and thus tryptophan depletion is not the mechanism of protection in the context of Mtb infection (Zhang et al., 2013). Inhibiting Mtb tryptophan synthesis using a small molecule inhibitor, effectively making the bacterium a tryptophan auxotroph, has been shown to improve Mtb containment (Zhang et al., 2013). IDO1 activation in Mtb infection has in fact been associated with poorer outcomes in animal models (Foreman et al., 2016), and higher serum IDO1 activity (and therefore lower tryptophan and higher kynurenine concentrations) has been associated with a worse prognosis in human patients (Suzuki et al., 2012). IDO1 inhibition has been shown to improve mycobacterial containment in a non-human primate model of Mtb infection, enhancing CD4+ T cell penetration into the granuloma (Gautam et al., 2018). Given that tryptophan is not synthesized by the human host, enzymes in the Mtb tryptophan synthesis pathway could be potential targets for novel treatments. Reactive nitrogen species are potent anti-microbial effectors and signaling molecules (Braverman and Stanley, 2017). Arginine can be metabolized by macrophage nitric oxide synthase (NOS) to produce citrulline and NO. NO can be further metabolized into reactive nitrogen species, including nitrite, which can act as antimicrobial effectors against Mtb, as well as inducing glycolytic reprogramming in response to infection through stabilization of HIF-1α and acting in a resolving capacity by limiting NFκB signaling (Braverman and Stanley, 2017).
Inducible nitric oxide synthase (iNOS) is encoded by the NOS2 gene and is the isoform of this enzyme that can be upregulated upon inflammatory activation and can function independently of calcium, unlike the constitutive isoforms. Conversely, arginase 1, the cytoplasmic isoform of arginase, which Mtb is known to specifically drive in infection, inhibits NO synthesis through several proposed mechanisms, including competing with NOS for arginine as a substrate. Arginase 1 converts arginine into urea and ornithine, from which hydroxyproline and polyamines can be generated for wound healing (Mills et al., 2000). Polyamines themselves can also inhibit iNOS activity (Southan et al., 1994). Arginase 1 is upregulated downstream of TLR and cytokine signaling and can reduce iNOS activity and impair the production of nitrite species (El Kasmi et al., 2008). The balance between these enzymes is important in determining the fate of arginine in the macrophage and thus its potential to produce anti-microbial nitrogen species. Expression of arginase 1 characterizes the alternatively activated macrophage (Byers and Holtzman, 2011) and is associated with a reduced propensity for bacterial clearance (El Kasmi et al., 2008). The iNOS/arginase-1 macrophage activation state paradigm is more clearly defined in the murine model, while the signals that drive macrophage activation in the human setting remain much more elusive, suggesting that macrophages in human Mtb infection fall on a spectrum of activation, and that skewing the population toward either end of this spectrum is what determines infection outcome. Non-human primates infected with Mtb have been shown to have both iNOS- and arginase-1-expressing macrophages in granulomas, with pro-inflammatory iNOS-positive macrophages organized at the center of the granuloma surrounded by alternatively activated arginase-1-positive macrophages on the periphery, and this distribution is mirrored in human granulomas (Mattila et al., 2013). To replace the depleted arginine in the macrophage, another amino acid, citrulline, can be converted to arginine (Wu and Brosnan, 1992) by the enzyme argininosuccinate synthase (Ass1). Citrulline has been shown to accumulate in murine lungs during the course of Mtb infection, coinciding with the upregulated expression of Ass1 in myeloid cells, deletion of which increased bacterial burden (Lange et al., 2019). Furthermore, arginine synthesized from citrulline has been shown to be used effectively by iNOS but to be less susceptible to arginase-1 depletion than imported arginine (Rapovy et al., 2015). Low plasma citrulline concentrations have been observed in patients with active Mtb disease (Weiner et al., 2012), with the ratio of citrulline to arginine being able to distinguish patient samples from controls (Vrieling et al., 2019a). Thus, while arginine supplementation as a therapeutic strategy has had mixed reported efficacy (Schon et al., 2003; Ralph et al., 2013), citrulline supplementation may hold some future therapeutic potential, having also been shown to aid in CD4+ T cell accumulation and activation in a murine Mtb infection model (Lange et al., 2017). Arginine regeneration from citrulline may also be linked to another metabolic process, the Krebs cycle. The Krebs cycle of activated macrophages is downregulated due to decreased delivery of pyruvate and becomes functionally broken in two places, leading to an accumulation of citrate (Infantino et al., 2011) and succinate (Jha et al., 2015).
During this inflammatory activation, an argininosuccinate shunt is engaged which bridges the urea cycle and the Krebs cycle (Jha et al., 2015), thus potentially generating both fumarate for the Krebs cycle and arginine for NO production through Ass1 (Murray, 2016). Recently, Yurdagul et al. described a novel role for arginine whereby arginine and ornithine from apoptotic cells phagocytosed by macrophages are metabolized to putrescine by Arg1 and ornithine decarboxylase (ODC) to promote continued efferocytosis through Rac1 activation (Yurdagul et al., 2020). Cell death is also a key process in the interaction between Mtb and the host. Mtb infection can induce necrotic cell death, whereby the infected cell lyses and allows further spread of the bacilli, which is favorable for Mtb. The Mtb ESX1 secretion system has been shown to be the molecular driver behind this promotion of necrotic cell death, and its absence is partly responsible for the attenuation of the strain of Mycobacterium bovis used in the BCG vaccine (Pym et al., 2002). Alternatively, apoptosis can be instigated, a controlled death program that maintains the integrity of the cell membrane and reduces Mtb survival (Behar et al., 2011). Infection with Mtb has been shown not only to induce macrophage apoptosis, but also to induce apoptosis of neighboring uninfected macrophages (Kelly et al., 2008). This bystander apoptosis may limit Mtb survival by depriving it of its host cell. More virulent strains of Mtb have developed resistance mechanisms to combat this by blocking apoptosis (Velmurugan et al., 2007). Apoptosis in and of itself is not effective at killing Mtb; however, the efferocytosis of infected apoptotic macrophages has been demonstrated to be an important mechanism for promoting bacterial clearance (Martin et al., 2012). It would be interesting to investigate whether the arginine/Rac1 mechanism of promoting continued efferocytosis is relevant in the case of Mtb infection. Like fumarate, glutamine can also be used as an alternative carbon fuel for the Krebs cycle, and as a source of citrate in fatty acid synthesis. Glutamine plays many roles in facilitating immune function. Activated macrophages increase glutamine uptake, and glutamine has long been known to be required for the production of IL-1β by LPS-activated macrophages (Wallace and Keast, 1992). Macrophages from mice fed a glutamine-enriched diet were shown to produce more TNF-α, IL-1β and IL-6 in response to LPS stimulation (Wells et al., 1999). Glutamine has also been shown to play a role in nitric oxide production (Murphy and Newsholme, 1998), replenishing intermediates in the nitrite and urea cycles to maintain flux through the system. More recently, glutamine has been shown to have an essential role in the alternative activation of macrophages, with glutamine deprivation, and its consequent effects on the TCA cycle, preventing polarization (Jha et al., 2015; Palmieri et al., 2017). Activation of mTOR can be mediated by glutamine, and thus glutamine can play a role in the induction of autophagy through mTORC1 (He et al., 2016). Several pathogens have been shown to alter glutamine metabolism. A recent study showed that infection of macrophages with Leishmania donovani increased the expression of genes involved in glutamine metabolism, and the inhibition of glutaminolysis increased susceptibility to infection and was associated with a more anti-inflammatory recruited myeloid population and poorer T-cell-mediated responses (Ferreira et al., 2020). Cumming et al.
have shown that Mtb infection creates a dependency on glutamine in infected human MDM (Cumming et al., 2018). Koeken et al. have further explored the importance of glutamine in the case of Mtb infection. They showed that the glutamine transcriptome is significantly upregulated in macrophages in response to Mtb infection, both in vitro and in vivo, and that interference with either the glutamine supply or its catabolism decreased cytokine responses to Mtb infection, particularly IL-1β. Additionally, they identified single nucleotide polymorphisms (SNPs) in genes belonging to the glutamine pathway that altered cytokine responses to Mtb in human peripheral blood mononuclear cells (PBMCs). Together these findings indicate an important role for glutamine in a robust response to Mtb infection, though the role of glutamine in in vivo infection has yet to be tested. Glutamine has additionally been shown to be essential in the induction of innate immune training (Arts et al., 2016), discussed below. Innate immune training, or trained immunity, is the concept that innate myeloid cells can mount a better immune response to a secondary exposure to a non-specific insult or pathogen (Netea et al., 2011). The induction of this trained phenotype in response to a range of stimuli, including the metabolite oxLDL, the fungal cell wall component β-glucan and whole microbes such as BCG, is dependent on immunometabolic reprogramming which mediates epigenetic changes (Kleinnijenhuis et al., 2012; Cheng et al., 2014). A number of metabolic pathways and metabolites are being shown to have key roles in the mediation of training. For example, Arts et al. demonstrated that glycolysis, glutaminolysis and cholesterol synthesis are all essential for the induction of innate immune training (Arts et al., 2016). They showed that β-glucan stimulation led to enhanced glycolysis and the accumulation of TCA cycle intermediate metabolites, including fumarate and succinate, and these metabolites mediated epigenetic reprogramming in the form of histone modifications to train these monocytes. Glutaminolysis-driven replenishment of the TCA cycle led to fumarate accumulation, which inhibited histone demethylases, allowing methylation of histones. Arts et al. additionally showed that a similar, epigenetically mediated induction of innate immune training was generated in response to BCG. A metabolite of the cholesterol synthesis pathway, mevalonate, has been shown to enhance activation of the insulin-like growth factor 1 receptor (IGF1R) and activate mTOR to induce histone modifications which induce innate immune training (Bekkering et al., 2018); this cholesterol-metabolite-mediated mechanism can be inhibited by statins. Innate immune training may have a role to play in immunity to Mtb infection. It has been well documented that there is a group of individuals, termed "early clearers", who come into contact with Mtb but do not develop active infection and remain tuberculin skin test (TST) negative, indicating that they clear Mtb without inducing an adaptive immune response (Pai et al., 2016). In addition to host-derived metabolites, the host microbiome is an additional source of metabolites that may influence both the survival of Mtb directly and the host immune response. At present, the contribution of the host microbiome to Mtb disease is not well characterized; however, observations from patient cohorts and studies in mouse and non-human primate models are starting to shape our understanding.
HIV-infected individuals, even those on anti-retroviral treatment, are more susceptible to Mtb infection. One contributing factor may be the difference in the microbial metabolites present in the HIV-infected lung. HIV patients have been shown to have an altered lung microbiome (Twigg et al., 2016), and Segal et al. found that short-chain fatty acids (SCFA), including acetate, propionate and butyrate, from lower airway anaerobic bacteria were enhanced in HIV+ individuals, and that these metabolites inhibited IFN-γ and IL-17A and increased T regulatory cell generation in PBMCs from these patients stimulated ex vivo (Segal et al., 2017). Another Mtb risk factor which may be linked, at least in part, to microbiome metabolites is type 2 diabetes. People with type 2 diabetes have been shown to have an altered gut microbiome (Larsen et al., 2010). An in vitro study on PBMCs showed that the SCFA butyrate suppressed pro-inflammatory cytokine expression while increasing production of IL-10, hinting that perhaps microbial metabolites could be playing a role in increasing Mtb susceptibility. Indole-3-propionic acid (IPA), a metabolite produced by members of the gut microbiome (Wikoff et al., 2009), was shown to inhibit Mtb growth in vitro and to lower splenic Mtb burden in a murine model (Negatu et al., 2018). IPA is a close analog of tryptophan and was shown to exert its antimicrobial activity by acting as an inhibitor of an enzyme in the Mtb tryptophan biosynthesis pathway (Negatu et al., 2019). Our understanding of the interaction between the microbiome, its metabolites and the host immune response to Mtb is in its infancy; however, these findings indicate that in addition to host-derived metabolites, microbiome-derived metabolites may have a role to play in Mtb disease. Metabolism has emerged as a new frontier in the field of immunology, providing better insight into the processes governing immune cell responses and providing a wealth of new therapeutic targets. Mtb infection remains a global health issue, and understanding the host immune response to this bacterium is of key importance for developing novel host-directed therapies. Dysregulated metabolism is a common signature in a range of disease states beyond infection, including cancer; thus, therapies targeting cellular metabolic functions will have applications far beyond the scope of Mtb infection. The burgeoning field of immunometabolism is providing exciting insights into the molecular mechanisms which govern a plethora of immune processes. Metabolites which are generated, or indeed depleted, by the metabolic pathways altered in response to infection are becoming appreciated as essential immune molecules rather than by-products of other processes, acting as signaling molecules, direct antimicrobial agents or, conversely, as fuel for the invading pathogen. Understanding how these metabolites can be harnessed to enhance Mtb treatments is of great importance. The metabolites which have been proposed to play a role in Mtb infection are summarized in Table 1. Dietary and pharmacological interventions which can alter the metabolites present at the site of infection may have the potential to work in tandem with current treatments and vaccination programs to generate a more effective environment in which our immune systems can tackle Mtb. Repurposing existing drugs as supplemental agents in tandem with existing tuberculosis treatments may hold particular promise.
For example, metformin, a drug commonly prescribed to type 2 diabetics, is an insulin sensitizer which targets complex I of the electron transport chain (El-Mir et al., 2000; Owen et al., 2000) and has been proposed as a supplementary therapy for Mtb which targets host metabolism (Oglesby et al., 2019). Epidemiological evidence has indicated that metformin both lowers the risk of developing active tuberculosis and lowers the associated mortality rate (Tseng, 2018; Zhang and He, 2020), while it has been shown to improve mycobacterial containment and reduce pulmonary pathology in a murine model of Mtb infection (Singhal et al., 2014), acting through multiple mechanisms centered on cellular metabolism, including mitochondrial ROS generation. Many questions remain to be answered: the precise roles of the metabolites discussed here and their mechanisms of action are not well defined. Moving forward, comprehensive carbon tracing experiments over a time course of infection with both live and dead Mtb may elucidate the kinetics of metabolism during infection and reveal the metabolites which are being actively altered by virulent Mtb to aid its persistence. Additionally, specific metabolites and metabolic processes which have been identified as having immunometabolic roles in other contexts should be explored in relation to Mtb infection; for example, the role of flux through the PPP and the relevance of the accumulation of TCA intermediates, including succinate and itaconate, in Mtb infection are basic questions which have yet to be properly addressed. Thus, if we want to use host-directed therapies to win the war against Mtb, we must ensure our army of immune processes is well fed. EH conceptualized and wrote the manuscript. FS wrote and edited the manuscript. All authors contributed to the article and approved the submitted version.
With at least 4,440,000 cases of coronavirus disease 2019 (COVID-19) worldwide to date and an estimated 302,000 deaths by June 2020, the pandemic caused by the new coronavirus SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) has captured the attention of the international community and of medical professionals all over the world [1-5]. As this pandemic continues to unfold across continents, data on the clinical characteristics and outcomes of COVID-19 patients are urgently demanded [6]. The clinical manifestation of COVID-19 ranges from mild and unspecific upper respiratory symptoms to severe courses requiring invasive mechanical ventilation and involving multiorgan failure [7, 8]. Mortality rates have been reported with a wide range, from 1% to up to 49% in elderly and comorbid patient cohorts [8, 9]. Risk factors for mortality include the presence of comorbidities such as hypertension, diabetes, chronic kidney disease, cardiovascular and chronic lung disease, as well as obesity [6, 10, 11]. Patients after solid organ transplantation require lifelong immunosuppressive therapy to prevent rejection episodes and may, therefore, be more vulnerable to COVID-19 [12-14]. Among those, heart transplant recipients have a particularly high prevalence of the comorbidities that have been established as risk factors for severe disease. Despite widespread concern about the potential for high prevalence and severity of COVID-19 among heart transplant recipients, reliable data on heart transplant recipients with COVID-19 have so far been missing, aside from case reports and case series [13, 15-18]. As transplant centers all over the world prepare for a rising incidence of the disease, knowledge about the clinical course, differences in disease susceptibility, clinical presentation and severity, and transplant-specific management of both antiviral therapy and immunosuppression is urgently needed. Here, we present a nationwide survey of all heart transplant centers in Germany describing the clinical characteristics of heart transplant recipients with COVID-19 during the first months of the pandemic in Germany. We performed a multicenter survey of heart transplant centers in Germany (24 centers) evaluating the current status of COVID-19 among adult heart transplant recipients (≥ 18 years of age). Information regarding COVID-19 and heart transplant recipients was obtained from all 24 centers. The study was performed in accordance with the ethical standards of the Declaration of Helsinki [19-21]. Written informed consent was routinely obtained from heart transplant recipients allowing the clinical and scientific use of data. Data were extracted from electronic and non-electronic medical records. COVID-19 was diagnosed either by a positive test result via reverse-transcriptase polymerase chain reaction (RT-PCR) of nasopharyngeal swab specimens, or by typical symptoms together with an abnormal chest computed tomography (CT) showing atypical pneumonia with bilateral infiltrates. RT-PCR can return falsely negative results in individuals with COVID-19, and CT sensitivity has been demonstrated to be superior to that of RT-PCR [22, 23]. 
Characterization of patients included recipient data, principal diagnosis for heart transplantation, immunosuppressive therapy, concomitant medication, symptoms, electrocardiogram (ECG), imaging results (echocardiography and chest CT), laboratory findings, treatment and disease management, as well as follow-up data. For further analysis, patients were stratified into severe and non-severe course of COVID-19. Severe course of COVID-19 was defined as need for invasive mechanical ventilation. Comparison between groups (severe and non-severe course of COVID-19) was performed by Student's t test/Mann-Whitney U test or chi-squared test/Fisher's exact test, as appropriate. Data were expressed as mean ± standard deviation (SD) or as count (n) with percentage (%). A p value of < 0.05 was considered statistically significant [24, 25]. A total of 21 heart transplant recipients diagnosed with COVID-19 were identified across all heart transplant centers in Germany in the period between March and June 2020. Three patients were treated at community-based hospitals, whereas 18 were treated at the transplant centers themselves. The most common reported symptoms of COVID-19 were dyspnea (85.7%), cough (76.2%), and myalgia/fatigue (76.2%), followed by rhinitis (66.7%) and fever (66.7%). A minority of patients presented with diarrhea (28.6%) or reported pain (23.8%). Only one patient reported anosmia or loss of taste (4.8%). No patient showed an impairment of left ventricular ejection fraction, but six patients (28.6%) developed reduced right ventricular (RV) function in the further course, in addition to an elevated systolic pulmonary artery pressure (28.6%) and moderate-to-severe tricuspid regurgitation (19.0%). Sixteen patients (76.2%) had an abnormal chest CT with atypical pneumonia including bilateral infiltrates. Four patients developed ECG abnormalities during their hospital stay (two patients had new-onset atrial fibrillation and two patients had a non-sustained ventricular tachycardia) and four patients had new thromboembolic events (two patients with new deep vein thrombosis and two patients with new pulmonary embolism). No patient in this study had atrial fibrillation before the COVID-19 infection and no patient was on an anticoagulant before the COVID-19 infection. The two patients with new-onset atrial fibrillation and the four patients with new thromboembolic events received full-dose unfractionated heparin. Patients with a non-severe course of COVID-19 infection predominantly received prophylactic anticoagulation, while patients with a severe course received full anticoagulation with unfractionated heparin or a direct thrombin inhibitor. At the beginning of the pandemic, there were no established protocols for the treatment of heart transplant recipients with COVID-19. Hence, the decision for antibiotic and antifungal treatment was based on the experience of the local transplant team. Blood cultures were routinely taken at the time of admission but were mostly negative or showed unspecific findings (Staphylococcus epidermidis or Enterococcus faecium). All patients received antibiotic therapy. In terms of the immunosuppressive drug therapy, drug trough levels of calcineurin inhibitors and mTOR inhibitors were slightly reduced depending on the period after heart transplantation. Drug doses were closely monitored and adapted accordingly. As the respective targets for drug trough levels varied between centers, no further analysis was performed. 
Mycophenolate mofetil was suspended in half of the patients (52.4%), and one patient on sirolimus was switched to tacrolimus (4.8%). Patients with a severe course additionally received pulse steroid therapy with a steroid dose of up to 200 mg to prevent pathological immune responses and to avoid adrenal insufficiency. Six patients were on dialysis before (28.6%) and five additional patients (23.8%) required dialysis in the further course. Eight patients (38.1%) required invasive mechanical ventilation, of whom seven died (87.5%). Extracorporeal life support (ECLS) was applied in three patients (14.3%), all of whom died, and was intended in another patient who died of septic shock before it could be initiated. Clinical presentation and treatment are given in Table 2. Laboratory values showed an increased mean high-sensitivity cardiac troponin T of 137.5 ± 113.3 pg/ml and an N-terminal prohormone of brain natriuretic peptide (NT-proBNP) of 9426.2 ± 12,835.1 ng/l. High-sensitivity troponin I instead of troponin T was measured in three of the 21 patients; however, as the assays are difficult to compare [26], we did not include these data. Blood count revealed leucocytosis (12.0 ± 6.1/nl) with a high neutrophil count (83.4 ± 3.8%) and a low lymphocyte count (9.2 ± 5.4%). In addition, analysis of markers of inflammation showed a pronounced elevation of mean C-reactive protein (132.7 ± 109.0 mg/l), procalcitonin (5.9 ± 8.2 ng/ml), lactate dehydrogenase (635.6 ± 317.1 U/l), ferritin (2619.4 ± 2451.4 µg/l), and D-dimer (4.4 ± 3.2 mg/l). Laboratory findings are displayed in Table 3. One of the latter patients with ventricular tachycardia was treated with hydroxychloroquine, which has been linked to pro-arrhythmogenic effects [27]. Only one patient with a non-severe course had reduced right ventricular function (7.7%). No one in this group displayed arrhythmias or had thromboembolic complications. Regarding kidney function, three patients with a non-severe course (23.1%) and three patients with a severe course (37.5%, p = 0.631) were on dialysis before COVID-19 infection. During COVID-19 infection, two additional patients with a non-severe course (15.4%) and three patients with a severe course (37.5%, p = 0.325) required dialysis. There were no statistically significant differences between groups concerning creatinine (p = 0.148) or glomerular filtration rate (p = 0.080). Comparison of markers of infection showed a significantly lower lymphocyte count (p = 0.013), and higher values for platelet count (p = 0.017), procalcitonin (p = 0.002), lactate dehydrogenase (p < 0.001), and D-dimer (p = 0.011) in patients with a severe course. Furthermore, patients with a severe course had significantly higher levels of high-sensitivity cardiac troponin T (p = 0.017) and NT-proBNP (p < 0.001). Data from the comparison between patients with severe and non-severe course are given in detail in Table 4. Taking all patients together, the severe form of COVID-19 occurred in 38.1% of heart transplant recipients, with a mortality of 87.5% in those patients. The overall mortality in this study was 33.3% (7 of 21 patients). All but 2 patients were hospitalized (90.5%) and 15 patients (71.4%) were at least temporarily treated on intensive care or intermediate care wards. At 30-day follow-up after COVID-19 diagnosis, 10 of 13 patients (76.9%) in the non-severe disease group were discharged and 3 of 13 patients (23.1%) were still hospitalized. 
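To make the group comparisons reported above concrete, the following is a minimal sketch of the statistical workflow described in the methods (Student's t test or Mann-Whitney U test for continuous variables, chi-squared or Fisher's exact test for categorical ones). All values and sample sizes here are illustrative placeholders, not the study data.

```python
# Hedged sketch of the severe vs. non-severe group comparisons; the numbers
# below are illustrative placeholders, not the study data.
import numpy as np
from scipy import stats

# Hypothetical continuous marker (e.g., D-dimer in mg/l) per group
severe = np.array([5.1, 7.9, 3.2, 9.4, 6.8, 4.4, 8.1, 5.5])
non_severe = np.array([1.2, 0.8, 2.1, 1.7, 0.9, 1.4, 2.3, 1.1, 0.6, 1.8, 1.5, 0.7, 1.0])

# Choose the parametric or non-parametric test based on normality
if stats.shapiro(severe).pvalue > 0.05 and stats.shapiro(non_severe).pvalue > 0.05:
    stat, p = stats.ttest_ind(severe, non_severe)  # Student's t test
else:
    stat, p = stats.mannwhitneyu(severe, non_severe, alternative="two-sided")
print(f"continuous variable: p = {p:.3f}")

# Hypothetical 2x2 table for a categorical variable (e.g., dialysis before
# COVID-19; rows = severe/non-severe, columns = yes/no)
table = np.array([[3, 5],
                  [3, 10]])
# Fisher's exact test is preferred over chi-squared when expected counts are small
odds_ratio, p_cat = stats.fisher_exact(table)
print(f"categorical variable: p = {p_cat:.3f}")
```

With the small cell counts typical of a 21-patient cohort, the exact test branch would apply to most of the categorical comparisons reported here.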
In the severe course group, one patient died in septic shock one day after admission to hospital, while five further patients died within 30 days. One patient in this group died 32 days after admission to hospital, after a prolonged intensive care unit stay. The last remaining patient in the severe course group was, at the time of preparing the manuscript, still in the intensive care unit requiring invasive mechanical ventilation, 60 days after hospital admission. The COVID-19 pandemic severely impacts large parts of the world, and the further development of this disease cannot be predicted. Patients after heart transplantation represent a particularly vulnerable patient population due to chronic immunosuppression, high rates of comorbidities and frequent contacts with medical professionals. We here present a multicenter study of COVID-19 among heart transplant recipients which represents a first nationwide survey of this disease in a solid organ transplantation cohort. Our data demonstrate an increased rate of the severe form of COVID-19 requiring invasive mechanical ventilation (38.1%) as well as a higher mortality (33.3%) compared to international cohorts of general populations [7, 28]. This is in line with findings from a mixed case series of 90 solid organ transplant recipients (46 kidney, 17 lung, 13 liver, 9 heart and 5 dual-organ transplants) by Pereira and colleagues [13], where a mortality rate of 17.8% was found (16 of 90 solid organ transplant recipients with COVID-19 died); a case series of 36 kidney transplant recipients by Akalin and colleagues [12], with a mortality rate of 27.8% (10 of 36 kidney transplant patients with COVID-19 died); and a case series of 28 heart transplant recipients in New York by Latif and colleagues [18], where the mortality rate was 25.0% (7 of 28 heart transplant patients with COVID-19 died). However, it is remarkable that 13 of 21 patients (61.9%) after heart transplantation had a non-severe course despite continuous immunosuppression and the high prevalence of comorbidities in our cohort. The effect of the underlying immunosuppressive drug therapy on the course of COVID-19 infection remains a matter of debate, as in vitro data suggest an inhibition of viral replication by immunosuppressive drugs [29-33], while long-term immunosuppression increases susceptibility to infection [12]. The elevated mortality rate in our study, as well as in other studies of solid organ recipients, rather implies a negative effect of immunosuppressive drugs on the course of COVID-19 infection [12, 13, 18]. Further research is therefore needed to better define the role of the immune response and its impact on outcomes in non-transplanted and transplanted (immunosuppressed) COVID-19 patients. Our data from the first months of the pandemic in Germany show a rather small number of heart transplant recipients with COVID-19 infection, given that between 250 and 350 heart transplantations are performed in Germany annually [34]. This observation is underpinned by a recently published study of 87 heart transplant patients monitored during January and February in China, without a single COVID-19-positive patient [35]. It could be related to a particular awareness among transplanted patients, who were commonly trained in infection prevention and hygiene measures already before the COVID-19 pandemic. 
However, our data relied on reports to the transplant centers, and we observed that some patients are primarily treated in community hospitals. Thus, our data may be incomplete and may underestimate the prevalence of COVID-19 among heart transplant recipients. Of note, we found only 3 patients from northern and eastern Germany and 18 from southern and western Germany, reflecting the inhomogeneous distribution of SARS-CoV-2 infection in Germany [36, 37]. The clinical presentation of COVID-19 in our cohort did not differ from that in non-transplant patients or other solid organ transplant recipients, as likewise depicted in other reports of immunosuppressed patients [12, 13, 15-18]. Typical symptoms in this study included dyspnea (85.7%), cough (76.2%), and myalgia/fatigue (76.2%), followed by rhinitis (66.7%) and fever (66.7%). Of note, only one patient (4.8%) reported anosmia or loss of taste, which has been described as an early sign of COVID-19 [38]. A specific treatment for COVID-19 remains unavailable; therefore, the initial clinical management is based on supportive care (antibiotic therapy, oxygen supply and supportive measures including intensive medical care, if needed). Pausing mycophenolate mofetil, as recommended in the "Guidance for Cardiothoracic Transplant and Ventricular Assist Device Centers regarding the SARS-CoV-2 pandemic" by the International Society for Heart and Lung Transplantation (ISHLT), and switching from sirolimus to tacrolimus are further options in heart transplant recipients due to the specific pharmacological properties of mycophenolate mofetil and sirolimus [17], and these strategies were applied in the majority of patients. However, due to the small sample size of our study, the effects of differential immunosuppressive and other therapeutic strategies cannot be judged and require further investigation in a larger cohort. Of note, we observed RV dysfunction, elevated pulmonary artery pressures, and tricuspid valve regurgitation particularly in patients with a severe course and high mortality. We can only speculate whether this merely reflects the invasive mechanical ventilation in these patients or whether these observations are the result of potential thromboembolic complications induced by COVID-19, as proposed by others [39, 40]. Several studies have reported a high incidence of thromboembolic complications in patients with COVID-19 [1, 41-43]. Patients with COVID-19 and thromboembolic complications tend to be older, have lower lymphocyte counts, and have higher D-dimer levels [41, 43]. A severe course of COVID-19 infection can lead to sepsis and increased release of inflammatory cytokines, which can promote coagulation activation and the occurrence of thromboembolic events [41]. As elevated D-dimer levels are a sign of excessive coagulation activation and hyperfibrinolysis, resulting thromboembolic complications may be associated with a poor prognosis in patients with COVID-19 [41]. In this study, 4 of 21 patients (19.0%) had new thromboembolic events (2 patients with new deep vein thrombosis and 2 patients with new pulmonary embolism). All four patients had a severe course of COVID-19 and were admitted to the intensive care unit. Therefore, in view of our findings and in accordance with other studies, pharmacological anticoagulation should be considered in patients with COVID-19 infection, especially in patients with a severe course in the intensive care unit [1, 41-43]. 
Interestingly, COVID-19 was accompanied by an increase in the cardiac biomarkers high-sensitivity cardiac troponin T and NT-proBNP. All patients demonstrated elevated values, but significantly higher levels of both biomarkers were found in patients with a severe course of COVID-19 requiring invasive mechanical ventilation, with consequent high mortality. Impaired outcomes in non-transplant patients with elevated cardiac troponins were likewise shown in early reports from China [28, 44, 45]. Multiple pathomechanisms for this observation have been discussed, including an imbalance of oxygen demand and supply, direct myocardial injury by viruses or cytokines, or precipitation of plaque rupture and a prothrombotic state leading to myocardial infarction [46]. Similarly, elevated NT-proBNP has been linked to adverse outcomes, although postulated cut-offs for elevated risk (88.6 pg/ml in the study by Gao et al. [47]) were far below the values we found in our patient cohort, with a mean of 9426.2 ng/l. In any case, the elevated cardiac biomarkers, arrhythmias, thromboembolic events, and RV dysfunction in patients with a severe course of COVID-19 in our study point to the importance of careful cardiovascular monitoring of patients after heart transplantation when infected with COVID-19. The present study was conducted as a multicenter survey of all heart transplant centers in Germany (24 centers). However, although all heart transplant centers provided information regarding COVID-19, some patients might have been treated at community hospitals without the knowledge of the related heart transplant centers. Furthermore, we could only include patients who presented at medical facilities, neglecting patients with a mild or subclinical course of COVID-19 who were not diagnosed. Moreover, the small number of patients limits the conclusions that can be drawn from our data. Our data demonstrate that within the first months of the pandemic in Germany, COVID-19 among heart transplant recipients was only rarely reported. However, when patients are affected, mortality is higher than in the general population and increases greatly when mechanical ventilation is needed. Attention to right ventricular dysfunction, arrhythmias, thromboembolic events, as well as to elevated cardiac biomarkers may be useful in the clinical management of COVID-19 in patients after heart transplantation. Given the increased mortality in our patient cohort, we would like to emphasize the importance of infection prevention, hygiene regulations and careful clinical assessment in this vulnerable patient population. Acknowledgments: Open Access funding provided by Projekt DEAL. Funding: This work was supported by research grants from the German Cardiac Society (Research Scholarship to RR) as well as by the Faculty of Medicine, University of Heidelberg (Physician-Scientist-Program Scholarship to RR). Conflict of interest: The authors report no conflicts of interest in this work. 
The novel coronavirus disease (COVID-19) started in Wuhan, China (and was thus initially known as the Wuhan virus) and expanded its circle to South Korea, Japan, Italy, Iran, the USA, France and Spain, finally spreading to India. It is termed novel because it is a never-before-seen mutation of an animal coronavirus, and the definite source of this pandemic is still unidentified. It has been suggested that the virus might be connected with a wet market (selling seafood and live animals) in Wuhan that was not complying with health and safety rules and regulations. The COVID-19 pandemic has been recorded in over 200 countries, territories, and areas, with about 3,000,000 confirmed cases and 200,000 deaths (WHO). COVID-19 is very similar in symptomatology to other viral respiratory infections. As it is a novel virus, the specific modes of transmission are not clearly known; it originally emerged from an animal source and then spread all over the world from person to person. Initially, there was speculation about the virus spreading while the carrier (infected person) shows no symptoms, but this has not been confirmed as a scientific fact (Kachroo, 2020). On 11 March 2020, the WHO changed the status of the COVID-19 emergency from a public health emergency of international concern to a pandemic. Nonetheless, the fatality rate of the current pandemic is on the rise (between 2 and 4 percent), though it is relatively lower than in the previous SARS-CoV (2002/2003) and MERS-CoV (2012) outbreaks (Malik et al., 2020). Thus, COVID-19 has presented an unprecedented challenge to the entire world. The reported symptoms of COVID-19 are cough, acute onset of fever and difficulty in breathing. Of all the cases that have been confirmed, up to 20% have been deemed severe. Cases vary from mild forms to severe ones that can lead to serious medical conditions or even death. It is believed that symptoms may appear in 2 to 14 days, as the incubation period for the novel coronavirus has not yet been confirmed; in India, however, a minimum quarantine period of 14 days has been declared by the Government for suspected cases. Since it is a new type of virus, a great deal of research is being carried out across the world to understand the nature of the virus, the origins of its spread to humans, its structure, and possible cures/vaccines to treat COVID-19. India also became a part of these research efforts after the first two confirmed cases were reported here on January 31, 2020. Screening of travelers at airports was then started in India, Chinese visas were immediately canceled, and those found to be affected by COVID-19 were kept in quarantine centers (Ministry of Home Affairs, Government of India, Advisory). In continuation, we take a look at a few of the interesting and important research efforts being carried out in India with respect to COVID-19. The ICMR, India, reports that SARI patients with no record of international travel or contact with infected persons have tested positive for COVID-19. Hence, it is important to optimize testing by developing strategies to identify potential cases that have a higher chance of being infected. Since the availability of resources such as testing kits, laboratories and health personnel is limited in India relative to the size of its population, the most practical approach is to test symptomatic patients presenting to hospitals and hotspots, with aggressive testing to identify and contain local chains of transmission. 
In the absence of a definite treatment modality such as a vaccine, physical distancing has been accepted globally as the most efficient strategy for reducing the severity of the disease and gaining control over it (Ferguson, 2020; Singh et al., 2020). India is also reported to be well short of the WHO's recommended minimum threshold of 2.28 skilled health professionals per 1,000 population (Anand et al., 2016). Therefore, on 24 March 2020, the Government of India under Prime Minister Narendra Modi ordered a nationwide lockdown for 21 days, limiting the movement of India's entire population of 1.3 billion as a preventive measure against the COVID-19 pandemic. It was ordered after a 14-hour voluntary public curfew on 22 March. The lockdown was imposed when the number of confirmed coronavirus-positive cases in India was approximately 500. On 14 April, the Prime Minister of India extended the nationwide lockdown until 3 May, with conditional relaxation after 20 April for some regions. On 1 May, the Government of India again extended the nationwide lockdown by two weeks, until 17 May. The Government has also divided the entire nation into three zones, viz. green, red and orange, with relaxations applied accordingly. Various measures such as social distancing, lockdown, masking and regular hand washing have already been implemented to prevent the spread of COVID-19, but in the absence of a specific medicine or vaccine it is very important to predict how the infection is likely to develop among the population, to support prevention of the disease and aid the preparation of healthcare services. This will also be helpful in estimating healthcare requirements and sanctioning a measured allocation of resources. It is a well-known fact that COVID-19 has spread differently in different countries, so any planning for a fresh response has to be adaptable and situation-specific. Data on the COVID-19 outbreak have been studied by various researchers using different mathematical models. Most pandemics follow an exponential curve during the initial spread and eventually flatten out (Junling et al., 2014). The SIR model is one of the best-suited models for projecting the spread of infectious diseases like COVID-19, where a person once recovered is not likely to become susceptible to the infection again (Kermack & McKendrick, 1991). The Susceptible-Infectious-Recovered (SIR) compartment model (Herbert, 2000) includes considerations for susceptible, infectious, and recovered or deceased individuals; a minimal sketch of this model is given below. These models have shown significant predictive ability for the growth of COVID-19 in India on a day-to-day basis so far. A recent study by Mandal et al., 2020 has shown that social distancing can reduce cases by up to 62 percent. Further, time series models have been employed for predicting the incidence of COVID-19 disease. Compared to other prediction models, for instance the support vector machine (SVM) and wavelet neural network (WNN), the ARIMA model is more capable in the prediction of natural adversities (Zhang et al., 2019). Time-dependent SIR models have been defined to account for undetectable infected persons with COVID-19 (Chen et al., 2020), and Chatterjee et al., 2020 studied a stochastic mathematical model of the COVID-19 epidemic in India. 
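As a concrete illustration of the SIR framework discussed above, the following is a minimal sketch of the classical compartment model. The parameter values (transmission rate, recovery rate, initial conditions) are illustrative assumptions for this sketch, not estimates fitted to the Indian data.

```python
# Hedged sketch of the classical SIR compartment model; beta, gamma and the
# initial conditions are assumed values chosen only for illustration.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    N = S + I + R
    dS = -beta * S * I / N              # new infections leave S
    dI = beta * S * I / N - gamma * I   # infections enter I, recoveries leave
    dR = gamma * I                      # recovered/removed individuals
    return [dS, dI, dR]

N = 1_300_000_000            # approximate population of India
I0, R0_init = 500, 0         # roughly the confirmed count at lockdown (assumed)
S0 = N - I0 - R0_init
beta, gamma = 0.3, 0.1       # implies a basic reproductive ratio of 3.0 (assumed)

t = np.linspace(0, 365, 366)
S, I, R = odeint(sir, [S0, I0, R0_init], t, args=(beta, gamma)).T
print(f"peak infectives: {I.max():.0f} on day {I.argmax()}")
```

Lowering beta in this sketch mimics interventions such as lockdown and physical distancing, which is why compartment models are used for the kind of scenario planning discussed above.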
The logistic growth regression model has been used to estimate the final size and peak time of the coronavirus epidemic in many countries of the world, and has given results similar to those obtained with the SIR model (Batista, 2020). It is well known that the effects of social distancing become visible only a few days after a lockdown, because the symptoms of COVID-19 normally take some time to appear after infection. One projection is that peak infection will be reached at the end of June 2020, with in excess of 150 million infectives in India and a total number infected estimated at 900 million. Other estimates indicate that, with hard lockdown and continued social distancing, peak total infections in India will be 97 million and the number of infectives by September is likely to be over 1,100 million (Schueller et al., 2020). In light of the recent development of this pandemic, we are interested in addressing the following important issues about COVID-19: 1) What is the expected time at which new corona cases will stop? 2) What is the expected maximum number of corona cases? 3) What is the significance of lockdown? In this paper, instead of developing a mathematical model for the pattern of spread of COVID-19, an attempt has been made to resolve these issues for India. Let us define the tempo of disease as the first difference in natural logarithms of the cumulative corona-positive cases on a day:

$$ r_t = \ln C_t - \ln C_{t-1}, $$

where $C_t$ denotes the cumulative number of corona-positive cases on day $t$. If the tempo remains at zero for a week, we can assume that no new corona cases will appear further. In the initial phase of the disease's spread the tempo increases, but after some time, once preventive measures are taken, it decreases. Since $r_t$ is a function of time, its first differential is defined as

$$ \frac{dr_t}{dt} = k\,(r_t - T_r), \qquad (1) $$

where $r_t$ denotes the tempo (the first difference in natural logarithms of the cumulative corona-positive cases on a day), $T_r$ is the desired level of tempo (zero in this study), $t$ denotes time, and $k$ is a constant of proportionality. Equation 1 is an example of an ordinary differential equation that can be solved by the method of separating variables. Equation 1 can be written as

$$ \frac{dr_t}{r_t - T_r} = k\,dt, \qquad (2) $$

and integrating both sides gives

$$ \ln(r_t - T_r) = kt + C, \qquad (3) $$

so that

$$ r_t = T_r + A e^{kt}, \qquad (4) $$

where $A = e^C$. Equation 4 is the general solution of equation 1. If $k$ is less than zero, equation 4 tells us how the tempo of corona-positive cases will decrease over time until it reaches zero. The values of $A$ and $k$ are estimated by the least squares procedure using the data sets; a sketch of this estimation is given below. The paper used the series of daily cases from the website corona19india.org. In this study, the day-wise cumulative number of corona-positive cases from April 1 to May 10, 2020 has been used to determine when the tempo of disease will become zero and what the size of the epidemic will be at that time. An attempt has also been made to assess the significance of lockdown with the help of the variation in the tempo of disease during the various lockdown periods. 
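The least squares estimation of $A$ and $k$, and the resulting projections, can be sketched as follows. This is a minimal illustration under stated assumptions: the case counts are placeholders rather than the series from the website named above, and the near-zero threshold for the tempo is our own choice.

```python
# Hedged sketch: fit r_t = A * exp(k t) to observed tempo values by least
# squares on the log scale, then project when the tempo becomes ~0.
# The cumulative counts below are illustrative placeholders, not study data.
import numpy as np

cum_cases = np.array([2000.0, 2400, 2820, 3240, 3650, 4030, 4370, 4660])
r = np.diff(np.log(cum_cases))   # tempo: r_t = ln C_t - ln C_{t-1}
t = np.arange(len(r))

# ln r_t = ln A + k t, so ordinary least squares on (t, ln r_t) gives k and ln A
k, lnA = np.polyfit(t, np.log(r), 1)
A = np.exp(lnA)

if k < 0:
    threshold = 1e-4             # tempo treated as effectively zero (assumption)
    t_stop = (np.log(threshold) - lnA) / k
    # Remaining growth: sum of A e^{kt} beyond the last observed day T,
    # approximated by the integral A e^{kT} / (-k)
    C_final = cum_cases[-1] * np.exp(A * np.exp(k * len(r)) / (-k))
    print(f"k = {k:.4f}, A = {A:.4f}")
    print(f"tempo ~0 around day {t_stop:.0f}; projected final size ~ {C_final:.0f}")
```

Because the fit is refreshed as new daily counts arrive, both the stopping time and the projected final size shift with the data, which is consistent with the paper's caveat that its forecasts change in real time.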
The Government of India implemented the lockdown on 24 March 2020, with the expectation that the tempo of disease would decrease. We have analyzed data for India along with selected states, namely Uttar Pradesh, Madhya Pradesh, Rajasthan, Bihar, Maharashtra, Gujarat, Delhi, Punjab, West Bengal, Tamil Nadu, Telangana, Karnataka, Andhra Pradesh and Kerala. In Punjab the recovery rate is very low, while in Karnataka the percentage of confirmed cases among total tests is very low, meaning that either the disease prevalence is low or the quality of the testing kits is not good. In Gujarat and Delhi this percentage is 7.4 and 7.9 percent, respectively. These percentages are very low because testing is done only in the hotspots; if only the population of the hotspots were considered, these percentages might be higher. Table 2 gives the estimates of the time at which new corona cases will stop and the maximum number of corona cases in selected states of India. It has been observed that in Kerala and Telangana there will be no new cases of coronavirus by the end of May and mid-June, with expected maximum cases of 600 and 2,000, respectively. In Uttar Pradesh, Madhya Pradesh, Rajasthan, Karnataka, Punjab, West Bengal, Bihar and Andhra Pradesh, there will be no new cases of coronavirus by the end of July, with expected maximum cases of 11,000, 18,000, 9,000, 3,000, 4,000, 16,000, 4,000 and 4,000, respectively. In Delhi, the virus will continue until mid-August. In Maharashtra, Gujarat and Tamil Nadu the pandemic will continue until the end of August, with expected numbers of cases of 115,000, 45,000 and 35,000, respectively. It is expected that the COVID-19 virus will disappear in India more or less by the end of August, with a maximum number of cases of about 350,000. The Government suggested and implemented social distancing and lockdown to control the spread of COVID-19 in society. Table 3 shows the summary statistics of the tempo of COVID-19, $r_t$, during the various lockdown periods in India. It is observed that the average tempo is at its maximum (0.167, with standard deviation 0.062) in the period prior to the lockdown. During the first lockdown period the average tempo is 0.140 with standard deviation 0.044, while in lockdown 2 it is 0.070 with standard deviation 0.012; thus it is clear that both the average and the standard deviation are decreasing. Table 4 presents the results of ANOVA testing for the mean of $r_t$ during the various lockdown periods, which is significant, meaning that the average tempo of COVID-19 differs significantly between the periods considered (a sketch of this comparison is given below). A group-wise comparison of the average tempo of COVID-19, $r_t$, during the various lockdown periods is shown in Table 5, which reveals that lockdown significantly affects the spread of COVID-19. Figure 1 shows that the tempo of disease $r_t$ is declining towards zero with time, more rapidly in Kerala and Telangana than in the other states, where it is declining slowly towards $r_t = 0$. COVID-19 has been declared a pandemic by the WHO and has become a major global threat. Prediction of a disease may help us to understand the factors affecting it and the steps that we can take to control it. 
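The ANOVA comparison of the mean tempo across lockdown periods (Tables 4 and 5) can be sketched as follows. The tempo values are simulated from the means and standard deviations reported above, and the sample sizes are our own illustrative assumptions, so the resulting p values will not reproduce the study's.

```python
# Hedged sketch of the ANOVA in Table 4 and the group-wise contrasts in
# Table 5; tempo values are simulated, not the observed series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_lockdown = rng.normal(0.167, 0.062, 14)   # mean, SD reported above; n assumed
lockdown_1   = rng.normal(0.140, 0.044, 21)
lockdown_2   = rng.normal(0.070, 0.012, 19)

f_stat, p = stats.f_oneway(pre_lockdown, lockdown_1, lockdown_2)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

# Pairwise follow-up comparisons between consecutive periods
for label, a, b in [("pre vs lockdown 1", pre_lockdown, lockdown_1),
                    ("lockdown 1 vs lockdown 2", lockdown_1, lockdown_2)]:
    res = stats.ttest_ind(a, b, equal_var=False)   # Welch's t test
    print(f"{label}: p = {res.pvalue:.4f}")
```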
The Government of India has taken preventive measures such as a complete lockdown at a very early stage of the disease, physical distancing and case isolation. Most importantly, many healthcare professionals are visiting each and every household in the hotspot areas across the country to trace and isolate infected persons in order to curtail the spread of the disease. To support the prevention of the disease and to aid healthcare professionals, an attempt has been made to develop a simple model for the prediction of confirmed COVID-19 cases and to use that model for forecasting future COVID-19 cases in India. As per the model forecast, confirmed cases are expected to gradually decrease in the coming weeks. It is also likely that efforts such as lockdown and physical distancing will affect this prediction as cases start to decline. On the basis of the data considered, one can predict that the final size of the coronavirus pandemic in India will be around 350,000 by the end of August. The exponential model used in this study is a data-driven model; thus, its forecasts are only as reliable as the data and their ability to capture the dynamics of the pandemic. Because the data change in real time every day, the predictions will change accordingly. Hence, the results of this paper should be used only for qualitative understanding.
The coronavirus disease (COVID-19) pandemic has dramatically impacted all aspects of healthcare delivery (1). There is widespread concern that increased clinical demands due to the virus will outstrip available resources. Much attention has been focused on how to view these suddenly urgent issues of distributive justice through the established lens of public health ethics (2). Most discussions on this subject have focused on how to prioritize and ration selected resources, namely, personal protective equipment, intensive care unit (ICU) beds, and ventilators (3). Although these are indeed critical conversations, the pharmaceutical drug supply, historically threatened, remains incredibly vulnerable at this time (4). Indeed, providing care to those who are critically ill with or without COVID-19 presupposes the availability of essential medications to treat their pain, sedate them, address secondary infections, and maintain their blood pressure. Drug shortages represent an ongoing public health crisis that predates COVID-19. The unavailability of life-saving medications engenders incremental expenses, patient harm, and increased medical errors, causing widespread trepidation in oncology, critical care, infectious disease, and innumerable other settings (5). A recent U.S. Food and Drug Administration (FDA) report summarizes and contextualizes the underlying root causes and potential solutions, highlighting economic drivers as the primary cause of drug shortages (6). A recent legislative report suggested incremental steps for mitigation (7). The current pandemic has caused disruptions to domestic and international supply chains, as well as globally increased demand for medications, further straining an already broken system. Although the federal government and various groups are continuing to work on potential solutions (8), the impact at the bedside will be formidable, and its scope remains as uncertain as the evolution of the pandemic itself. Herein, we provide guidance for clinicians and the institutions tasked with preventing, mitigating, and managing potential scarcities of essential medications in the current pandemic. Formulating a plan and response to impending drug shortages requires information. Given that drug shortages have been a reality for the past decade, pharmacists and health systems have become adroit at monitoring and responding to them; in fact, it has even become a component of pharmaceutical training (9). Much of this information is available online in formats that are easily synthesized by institutions and clinicians. Both the FDA (10) and the American Society of Health-System Pharmacists (ASHP) (11) maintain dynamic databases of current drug shortages, and these resources can be invaluable. Independent healthcare companies may also provide guidance and data regarding how specific drugs are impacted in real time (12). Regional communication can determine how local supply chains are impacted, and potential coordination and sharing mechanisms are also critical (13). Ideally, information sharing should occur via a central repository or clearing house. For example, in many states, the local government requires individual health systems to report the number of ventilators available and reserves the right to reallocate these ventilators to communities and hospitals in need. Similarly, at the federal level, the Department of Health and Human Services is responsible for allocating the limited supplies of remdesivir to individual states. 
Although this process has been far from perfect, this model of distribution holds promise and should not be abandoned. Sharing information is an important first step; the second and more difficult step involves actual sharing of medications across hospitals and health systems. Despite calls to allow such care coordination (14), barriers remain, including the need for cooperation by competing health systems, concerns about potential liability, and legal regulations that affect the transfer of drugs. In the state of Maryland, in an effort to promote uniform and consistent prioritization of scarce resources (e.g., ventilators, ICU beds, and medications), competing hospital systems have aligned to create an agreed-upon joint allocation framework. Importantly, such an approach assures the public that allocation will occur in a thoughtful, transparent, and fair manner (15). In the COVID-19 era, efforts to silo information, as well as manpower, pose a real threat. Thus, in this time of crisis, it is critical to rely upon and expand these resources and networks. Many larger institutions maintain dedicated resources to identify and mitigate shortages, yet may still struggle to communicate real-time information across service lines and disciplines. Smaller institutions may find it easier to communicate, but these organizations may lack resources, with clear implications for patients, further aggravating disparities in access to basic and critical medications. Given the need for rapid redeployments and massive changes in manpower assignments, ensuring that increased efforts focus on responses to drug shortages will be critical. It will be equally important to facilitate communication between pharmacists (those tasked with maintaining supplies, as well as those embedded with clinical teams) and clinical teams about how supplies may impact care delivery. Evidence-based preservation of drugs that are in limited supply, even before critical shortages occur, is a necessary component of a cohesive rationing strategy. Often informed by the pharmacists serving within an interprofessional group (16), critical care providers are all too familiar with shortages of medications that are an essential part of their day-to-day management, and thus are accustomed to improvising in selected circumstances. Shortages of parenteral opioids and small-volume saline have similarly required workarounds and alternatives. Proactively implementing some of these strategies even before a critical shortage occurs is of value, especially given the disruption of supply chains that may engender shortages with even less notice than before COVID-19 (17). Pandemic-era strategies for conserving commonly used critical care agents at risk of being in short supply are presented in Table 1, recognizing that these shortages are often regional and unpredictable, and that intensive care protocols and strategies are highly individualized (18). As another example, although intravenous solutions are liberally administered in acute care settings (19), novel strategies that can safely maintain fluid balance while conserving resources are worth considering (20). Anesthesia providers are also adept at selecting alternative regimens during shortages. As organizations attempt to balance critical and elective surgeries with current or presumptive planned needs, flexible anesthetic and sedation techniques will be vital. 
Scarce-resource allocation committees are being engaged at many institutions to manage anticipated critical shortages related to COVID-19, in many cases informed by statewide guidance (21). However, many of these committees may be focusing on ventilators, ICUs, and other specific high-ticket resources. We call upon all stakeholders, from governments to clinicians, to refocus some of these efforts on essential medications. Established workflows and rationing criteria that predate COVID-19 can provide clear prioritization schema for scarce medications that take into account ethical, logistical, and legal factors (22-24). Many of these will need to be updated and amended to be applied appropriately to the current pandemic. This relates to the types of shortages we anticipate, as well as to the reality of medical practice in the midst of a pandemic. As one component of this effort, pharmacists and institutional scarce-resource allocation groups will need to transparently consider the triggers for formally considering a drug supply threatened, limited, or subject to rationing. The lines between routine care, evidence-based conservation, and rationing are important. There is a lack of consensus between the FDA and ASHP regarding the definition of a drug shortage, with each defining the threshold for a drug shortage differently. The first step in addressing drug shortages is to agree on an accepted and common definition. Because the ASHP's definition of "drug shortage" is broader in scope, we prefer its approach. Given the unique nature of local supply chains and distribution systems, arguably, individual hospitals and health systems will make different decisions regarding mitigation and conservation strategies. Irrespective of which approach is used, the need to alter the standard of care must be discussed openly with patients. In fact, in support of transparency, some have argued that hospitals should publicly post a notice when they are faced with drug shortages (25). Even if there are sufficient ventilators, a critical shortage of sedatives, paralytics, and/or opioids will obviate the ability to safely keep patients intubated, and data suggest that these shortages have already been associated with inadvertent extubations (26). Moreover, shortages of vasopressors and inhalers may limit clinicians' ability to manage critically ill patients regardless of disease state or respiratory status, and will need to be incorporated more explicitly into rationing schema. Scarce-resource allocation teams must also consider the understandable yet nonetheless troubling rush to adopt putative treatments for COVID-19, such as hydroxychloroquine and azithromycin, among many others, despite a lack of proof of safety or effectiveness (27, 28). In the case of hydroxychloroquine, hoarding has prompted shortages, jeopardizing the well-being of patients for whom hydroxychloroquine is a proven intervention. Once viable and effective treatments and/or vaccines for COVID-19 are available, prioritizing nascent supplies will present a formidable ethical and logistical challenge, albeit one that will depend on unknown clinical and logistical factors (such as who stands to benefit the most, and oral vs. parenteral dosing, among a litany of others). The initial experience with remdesivir is a deeply troubling harbinger (29). Although it was beyond the scope of this paper, in the coming days and months this matter will demand global attention. 
Those in charge of institutional responses to a pandemic must integrate with other individuals, taking into account extant resources, to determine how best to plan for these eventualities. Moreover, ensuring that such plans are shared broadly with all stakeholders, ranging from clinical pharmacists to hospital executives, policymakers, and beyond, will be critical to enable a response to a critical shortage in real time and to adjust clinical workflows and appropriate prioritizations accordingly. COVID-19 has upended an already vulnerable medication supply chain and risks engendering devastating shortages of life-saving drugs for patients, regardless of whether they suffer from this virus. Clinicians and the institutions for which they work will need to communicate at local, regional, and national levels to respond appropriately. Whenever feasible, they will need to use the best available evidence to conserve existing supplies, and they will need to plan for contingencies, such as how to prioritize patients in the event of a critical shortage. Only with clear lines of communication and a proactive, collaborative approach can we weather this impending storm. Author disclosures are available with the text of this article at www.atsjournals.org.
At present, people around the world are coping with the COVID-19 pandemic, a global health crisis that has already claimed the lives of over 1.1 million people. We are all experiencing a prolonged stress situation that has most, if not all, of the elements that create conditions for significant and persistent distress. Most accounts of the stress and challenges involve a focus on the uncertainty and the threat to the health and well-being of the individual person and her or his friends and family members. Stress is also being experienced through the significant disruption to people's daily routines and lives in general. Some of this stress and distress comes from people learning things about themselves that perhaps they never wished to learn, as individuals who thought they were resilient find out that they are actually quite vulnerable. Of course, there are also significant economic difficulties being experienced worldwide as businesses are lost and millions of people around the world are unemployed for the first time in their lives, and there is profound economic anxiety (see Bareket-Bojmel et al. 2020). Finally, when people are upset, they turn to other people for comfort and support, yet health officials and government leaders prescribe that this must be done at a distance and online in most instances. Attempts to stop the spread and transmission of COVID-19 are centered primarily on engaging in physical distancing. This practice has been called social distancing to reflect the fact that people are to stay physically apart from other people. This necessity amounts to a form of separation that is believed to significantly escalate feelings of isolation in ways that may contribute to unprecedented levels of loneliness. The research described below was inspired largely by our interest in examining loneliness and the factors that are associated with it. Clearly, by all accounts, the current pandemic represents a strong, evolving situation that needs to be understood on multiple levels. It seemed imperative to us to try to understand it from a personality and individual differences perspective. The vast majority of research investigations in the personality field do not take the situation into account, despite classic calls for a joint focus on personality and the situation and evidence of the utility of this approach (see Endler and Magnusson 1976; Endler and Parker 1992; Magnusson and Endler 1977; Mischel and Shoda 1995). It has been suggested that attempts to understand personality vulnerability factors and the experience of stress and distress must be extended to include an emphasis on the personality-situation interaction (see Flett et al. 1995). Flett and associates (Flett et al. 1995) observed the need for personality research that incorporates a detailed analysis of contextual factors in the environment that impact behavior (also see Coyne and Whiffen 1995). An observation made 15 years ago still applies; that is, it was noted that, "Although the benefits of such an approach have been acknowledged widely, it is generally the case that most personality studies do not attempt to examine personality factors within the context of concurrent situational factors" (Flett et al. 1995, p. 316). The present research reflects this emphasis and includes an explicit focus on individual differences in dependency and self-criticism as described by Blatt and his colleagues (e.g., Blatt and Zuroff 1992). 
Both self-criticism and dependency reflect a negative model of the self, but for different reasons. According to Zuroff (1992, 2002), self-criticism entails an introjective orientation with a focus on achieving personal goals and competing successfully with others. Self-critical people engage in harsh self-scrutiny and gain little satisfaction from their accomplishments due to a perfectionistic orientation that can become highly individualistic and may distance them from other people. Alternatively, dependency reflects unrequited interpersonal needs and an anaclitic orientation that involves a preoccupation with other people and requiring them to stay in close proximity. Dependent individuals see themselves as helpless and weak on their own and often have abandonment fears, which could be activated by being socially isolated during the pandemic. In essence, dependency reflects a personality orientation underscored by a need for relatedness and association with significant others, whereas self-criticism reflects a personality orientation guided by a need for self-definition, individualization, and personal identity (see Blatt and Blass 1996). There is now a voluminous research literature that attests to the vulnerability of the self that is inherent in self-criticism and dependency, especially within the context of daily life stress (Dunkley et al. 2003), but most of this research has been conducted without taking the situational or life context into account. However, there are some noteworthy exceptions (e.g., Casalin et al. 2014; Sharhabani-Arzy et al. 2005) and some clear examples of why and how contexts matter. For instance, evidence of the need for a stress threshold model emerged from a prospective study of self-criticism and dependency in postpartum adjustment among women with high-risk versus low-risk pregnancies (see Besser et al. 2007). Other research examined self-criticism and dependency among three samples of participants (i.e., current undergraduate students, recently graduated students, and chronic pain patients) who had been exposed to missile attacks while living in the southern region of Israel (see Lassri et al. 2013). Analyses confirmed the presence of a significant interaction effect of self-criticism and terrorism-related stress predicting elevations in levels of general psychopathology. Our central emphasis on dependency and self-criticism is due, in part, to the relevance of these personality orientations during a stressful time when highly impactful circumstances should elicit and activate concerns about being separated from others and perhaps being critical of less than optimal reactions to this public health crisis. It also reflects conceptualization and evidence highlighting that these orientations can operate both as traits and in a more state-like manner; it has been emphasized that these personality orientations become more accessible as a result of current moods and social contexts (see Zuroff et al. 1999; Zuroff et al. 2016). Our focus on adaptability reflects how closely suited the adaptability construct is to the challenges that face people throughout the pandemic. Adaptability has been conceptualized and defined as "… the capacity to constructively regulate psycho-behavioral functions in response to new, changing, and/or uncertain circumstances, conditions, and situations" (Martin et al. 2013, p. 728). Adaptability is seen as being required and called for at various points throughout a person's lifespan. 
Although it can seem similar, adaptability is distinguished from resilience (see Martin 2017). Our focus on adaptability and the stress of the COVID-19 pandemic is in keeping with Selye's (1993) observation that "… all demands upon our adaptability do evoke the stress phenomenon" (p. 7). The current research was based on a modified version of an adaptability measure that was intended to specifically capture adaptability in response to the pandemic. Martin and associates (Martin et al. 2013) developed the Adaptability Scale, a nine-item self-report inventory reflecting an emphasis on general adaptability in terms of abilities to modify cognition, behavior, and emotion. We determined that small modifications would result in a measure that would tap meaningful individual differences in adaptability among students who were required to cope with the COVID-19 global health pandemic. The use of this measure while the current pandemic and all of its challenges were still unfolding seemed particularly appropriate given that all respondents were asked to respond to items reflecting a concept and an orientation that was highly relevant to their current daily lives. An overarching premise guiding our investigation was our contention that whether a person is able to develop a high level of adaptability is a reflection, in part, of whether they have a positive self-definition. Moreover, when adaptability is put into practice and yields positive outcomes, that should further add to a positive self-concept. Conversely, people with a negative self-concept will lack the sense of efficacy and capability that promotes both the feeling and the actual ability to adapt. Accordingly, this study included various individual difference measures to reflect the presence of either a positive self or a negative self. This included personality factors associated with a negative sense of self (i.e., dependency and self-criticism) and a positive sense of self (i.e., self-esteem and mattering), as well as cognitive measures that also varied in their valence (i.e., positive automatic thoughts and negative automatic thoughts). This distinction was also reflected in our outcome measures, which tapped not only current adjustment by assessing stress, distress, and negative emotions but also the current experience of positive emotions. Self-esteem and mattering were included as another main focus given general evidence of the benefits of self-esteem and mattering in adapting to life challenges (see Flett 2018). A series of analyses have underscored a sense of mattering to others as being vital in helping people adjust to the stress and psychosocial challenges arising out of being physically isolated and feeling socially isolated (see Casale and Flett 2020; Flett and Heisel 2020; Flett and Zangeneh 2020). These analyses, as well as work with the Anti-Mattering Scale (see Flett 2018), further highlight that just as mattering is protective as a key element of self-worth experiences, marginalization experiences and feelings of not mattering contribute to feelings of stress and distress. The self-esteem and mattering concepts were highlighted in work by Morris Rosenberg (see Rosenberg 1965, 1979; Rosenberg and McCullough 1981). Mattering is the feeling of being important and significant to others, and it is particularly relevant during times of transition (see Rosenberg and McCullough 1981). Mattering involves having value to others and giving value to others (Prilleltensky 2020).
Research has established that elevated levels of mattering have strong negative associations with loneliness, self-criticism, and self-hate (Flett et al. 2020a; Flett et al. 2016; Joeng and Turner 2015). Given the emphasis throughout the pandemic on separation and the ability of people to cope with physical separation and social separation, as well as evidence of substantially elevated levels of loneliness (see Kilgore et al. 2020), we recognized the current situation as a highly relevant context for potentially gaining new insight into and understanding of the nature of loneliness and its correlates. Our interest in the associations between personality traits and loneliness goes back to earlier research on self-criticism, dependency, and loneliness (see Besser et al. 2003). Collectively, the results of various studies converge to suggest that both self-criticism and dependency are associated with elevated levels of loneliness, but stronger associations are found between self-criticism and loneliness (e.g., Besser et al. 2003). These associations merit further consideration within the context of a situation that may be making feelings of loneliness both stronger and more salient. We sought to gain new insights with an approach to loneliness that is unique and reflects our interest in understanding current experiences of loneliness from a state perspective. Specifically, we supplemented a standard measure of loneliness with a new automatic thoughts measure of loneliness that reflects the premise that some people are thinking frequently about their feelings of loneliness and their perception of being unable to control or escape these feelings. These feelings are clearly on display, or at least alluded to, in case examples of people who suffer from chronic feelings of loneliness. They are often mentioned when people are interviewed and asked to indicate how they would cope with loneliness if they were put into situations of physical and social isolation for a protracted period of time (see, for instance, the BBC Desert Island Discs podcasts). Our emphasis on this element reflects our sense that physical isolation during the pandemic has left many people with overwhelming feelings of anxiety that may have developed into a cognitive preoccupation, to the point that some people can no longer stand the thought of being lonely for any significant stretch of time. Moreover, from a social science perspective, while there has not been, to our knowledge, any research thus far on loneliness-related automatic thoughts, several studies have linked loneliness with the ruminative brooding that is known to prolong and exacerbate depression (e.g., Borawski 2019; Vanhalst et al. 2012; Zawadzki et al. 2013). The notion that there is a cognitive element to loneliness that remains to be addressed represents a potentially important extension of the loneliness construct. This emphasis on the cognitive aspect of loneliness and the thoughts experienced during the pandemic is in keeping not only with analyses of loneliness and its clinical relevance (e.g., Heinrich and Gullone 2006) but also with conceptual descriptions of the need to consider self-criticism and dependency from a cognitive perspective. Flett et al. (1995) issued a call for a conceptual and empirical focus on the cognitive aspects of dependency based on past suggestions.
For instance, Blatt and Shichman (1983) proposed that the self-critical, introjective style and the anaclitic, dependent style involve a cognitive component that extends to excessive cognitive rumination about themes directly related to these personality traits. Accordingly, although it was not our primary focus, one clear hypothesis for the current study was that self-criticism and dependency would be associated with more frequent negative automatic thoughts and less frequent positive automatic thoughts. Moreover, and most notably, participants with elevated levels of dependency were expected to be especially prone to experiencing automatic thoughts about loneliness. In summary, the design of the current study was shaped by three primary considerations. First, we sought to examine the role of a positive versus a negative self-concept in coping with the current global health crisis by including measures, both distal and proximal, that reflect risk and vulnerability due to their typical links with negative elements of the self, versus resilience and adaptation due to their typical links with positive elements of the self. Second, we focused on the concept of adaptability since it seems so highly relevant to the challenges facing people who must contend with the COVID-19 pandemic. Finally, the current study was designed to illuminate the experience of loneliness and associated individual differences as people try to adapt to life during a global pandemic. As noted above, the current study included a focus on individual differences in dependency and self-criticism, with a key component of this research being its focus on participants' reports of their adaptability to the pandemic. Regarding potentially protective factors, we also included multiple measures to investigate individual differences in feelings of mattering and not mattering, as well as associated fears, in line with recent analyses pinpointing mattering as a key protective resource that should help people withstand the isolation and uncertainties of the pandemic (see Casale and Flett 2020; Flett and Heisel 2020; Flett and Zangeneh 2020). An email with a link to a secure online questionnaire was sent by the internal systems of five public higher education institutions in Israel (i.e., public academic colleges) to their undergraduate students requesting volunteers for a study concerning "personality and experiences of loneliness." This message was sent at the end of the 9th-10th week of social distancing and remote learning (i.e., about 75% of the way through the second semester of the academic year in Israel). Most students had at least one full semester of traditional face-to-face learning before the transition to online learning and would have spent most of the current semester isolated from campus (e.g., online synchronous and/or asynchronous learning). The secure online questionnaire included informed consent, demographic questions, and measures of personality characteristics, adaptability, mattering, loneliness, current levels of positive and negative emotions (i.e., during the pandemic), and general levels of these same emotions. A total of 899 students entered the website, with 462 of those students actually completing the questionnaires (349 women [75.5%] and 113 men [24.5%]). Their mean age was 28.41 years (SD = 8.59; mode = 25.0; median = 26.0), and 41.1% of the participants were in their first academic year, 27.3% were in their second year, and 17.3% were in their third year.
The academic majors of the participants were as follows: 42.4% social sciences, 23.8% sciences, 6.7% art, 2.6% law, 6.3% humanities, and 18.2% management studies. The sample consisted predominantly of participants who were single (71.9%), Jewish (86.8%), and currently unemployed or on forced/unpaid vacation due to the COVID-19 pandemic (59.5%). The self-reported current economic status of these participants was 5.6% "very good," 25.8% "good," 41.3% "moderate," 18.4% "not good," 5.2% "bad," and 3.7% "very bad." We decided that the sample size for this study should be at least 250 based on a power analysis (power > .80) for the average effect size in social-personality psychology (r ≈ .21; Richard et al. 2003), in conjunction with the guidelines for reducing estimation error in social-personality psychology (N ≥ 250; Schönbrodt and Perugini 2013), but we deliberately oversampled in an effort to increase the statistical power of the study. Participation in this study was voluntary, and participants were aware that they could withdraw from the study at any time. All participants provided their signed, informed consent. No social security numbers or other identifying data were collected, nor were any invasive examinations conducted. This project was conducted with the approval of the Ethics Committee (IRB) of Hadassah Academic College.
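To make the sample-size reasoning above concrete, the following sketch (ours, not part of the original study) reproduces the two constraints: the approximate N needed to detect r ≈ .21 with power above .80 under the standard Fisher z approximation, and the N ≥ 250 stability guideline. The function name and the use of scipy are our own illustrative choices.

```python
# A minimal sketch (not from the paper): sample size to detect r = .21
# with 80% power at alpha = .05 (two-tailed) via the Fisher z approximation,
# combined with the N >= 250 stability guideline (Schonbrodt & Perugini 2013).
import math
from scipy.stats import norm

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-tailed alpha
    z_beta = norm.ppf(power)           # z corresponding to the desired power
    z_r = math.atanh(r)                # Fisher z-transform of the effect size
    return math.ceil(((z_alpha + z_beta) / z_r) ** 2 + 3)

n_power = n_for_correlation(0.21)  # ~176 participants for power alone
n_target = max(n_power, 250)       # the stability guideline dominates here
print(n_power, n_target)           # 176 250
```

Both constraints are comfortably exceeded by the final sample of 462 completers.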
Adaptability Adaptability was measured using a modified nine-item version of the Adaptability Scale (Martin et al. 2013). The modifications involved slightly rewording each of the nine items of the Adaptability Scale to focus on the COVID-19 pandemic situation rather than the decontextualized form of adaptability that was the focus of the original instrument. For example, the item "I am able to think through a number of possible options to assist me in a new situation" was altered to read "I am able to think through a number of possible options to assist me in this new situation." Each item from the Adaptability Scale was designed to reflect the following criteria: (a) appropriate cognitive, behavioral, or affective adjustment in response to (b) uncertainty and/or novelty that has (c) a constructive purpose or outcome. Martin and associates (Martin et al. 2013) advised that adaptability can be operationalized as a higher-order factor (indicated by a cognitive-behavioral factor [six items] and an affective factor [three items]) or as a first-order factor (indicated by nine items). In the interest of parsimony, we adopted the latter operationalization and focused on the nine-item composite score for adaptability (α = .92). Participants were asked to rate their level of agreement with each item using scales that ranged from 1 (strongly disagree) to 7 (strongly agree). This instrument has been shown to demonstrate adequate psychometric properties (e.g., Martin et al. 2013). We found in another investigation with an independent sample of over 1200 students from Israel that this modified version of the instrument, focused on adaptability to the pandemic, also had an internal consistency of .92 (see Besser et al. 2020). Self-Criticism Self-criticism was assessed with a six-item measure based on items that Shahar and associates (Shahar et al. 2008) selected from the 66-item Depressive Experiences Questionnaire. Various authors have utilized this brief scale (e.g., Zuroff et al. 2016). This six-item self-criticism measure has items such as "Often I find I do not live according to my standards or ideals" and "I have a tendency to be very self-critical." All six items are worded positively to reflect self-criticism. Items were rated using a scale that ranged from 1 (not at all) to 7 (very much). The internal consistency of this subscale was .80 in the original study and was estimated at .82 or greater in another recent study with university students (Bar et al. 2020); in the current study, the six items had an internal consistency of .79. Dependency Dependency was assessed with a six-item measure consisting of items we selected based on our review of the dependency facet of the Depressive Experiences Questionnaire (see Blatt et al. 1995). We included items such as "I often think of the danger of losing someone who is close to me" and "Without support from others who are close to me, I would be helpless." Five items were worded so that higher scores reflected dependency, along with one item that was reverse-scored. Items were rated using a scale that ranged from 1 (not at all) to 7 (very much). The six items had an internal consistency of .73. The Single-Item Self-Esteem Scale This one-item scale was constructed as a briefer assessment of self-esteem than existing measures such as the Rosenberg Self-Esteem Scale (see Robins et al. 2001). Respondents indicated their level of agreement with the item "I have high self-esteem" on a scale that ranged from 1 (not very true of me) to 5 (very true of me). This very brief measure and the Rosenberg Self-Esteem Scale yield comparable patterns of results (e.g., Robins et al. 2001), and it is particularly well suited to assessments in which economy of measurement is a consideration. The General Mattering Scale (GMS) The GMS, developed by Marcus and Rosenberg (1987), is a five-item scale that measures the extent to which people perceive that they matter to others. A representative item is "How important do you feel you are to other people?" Items were rated using a scale that ranged from 1 (not at all) to 4 (a lot). Higher scores indicate greater levels of perceived mattering. Factor analysis has shown that this scale is a unidimensional measure with good reliability and validity (Taylor and Turner 2001). The internal consistency of this measure in our study was .85. The Anti-Mattering Scale (AMS) The five-item AMS (Flett 2018; Flett et al. 2020a, b) measures the extent to which individuals feel like they do not matter to others. It is designed to parallel the GMS but focuses on feelings of not mattering to reflect a perspective of feeling marginalized, and it predicts unique variance in outcomes beyond what is explained by the GMS (see Flett et al. 2020a, b). Sample items include "How much do you feel like you don't matter?" and "How often have you been treated in a way that makes you feel like you are insignificant?" Items were rated using a scale that ranged from 1 (not at all) to 4 (a lot). Higher scores on this scale indicate greater levels of anti-mattering. The AMS items had an internal consistency of .87 in the current study. Fear of Not Mattering Scale This five-item measure was newly created by Flett (2020) to assess the fear of becoming insignificant and unimportant to other people. Sample items include "Are you afraid that you will not matter to other people?" and "Do you worry that others will see you as unimportant or insignificant?" Items were rated on a scale that ranged from 1 (not at all) to 4 (a lot). Evidence from another sample of Canadian university students attests to the internal consistency of this measure, with an alpha of .91 (McComb et al. 2020). The internal consistency for this measure in the current study was .91.
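For reference, the internal-consistency values quoted for these scales follow the standard Cronbach's alpha formula sketched below; the data here are simulated purely for illustration (a single latent factor driving six 1-7 items), not the study's data.

```python
# A minimal illustration (simulated data, not the study's) of the Cronbach's
# alpha computation behind the reported internal-consistency values.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(462, 1))                  # one latent factor, N = 462
raw = 4 + 1.2 * latent + rng.normal(size=(462, 6))  # six correlated items
items = np.clip(np.round(raw), 1, 7)                # clamp to the 1-7 scale
print(round(cronbach_alpha(items), 2))              # prints a value near .9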
ATQ-N This eight-item scale was created by Netemeyer et al. (2002) as a short version of the 30-item Automatic Thoughts Questionnaire (Hollon and Kendall 1980). Items were selected because they had item-total correlations of .50 or greater, with two items each chosen to reflect the four themes found to characterize the original 30-item ATQ by Hollon and Kendall (1980). Analyses by Netemeyer et al. (2002) confirmed that the eight-item version had a high internal consistency of .92, and scores on this measure were very highly correlated with overall scores on the 15-item and 30-item versions. This version included items such as "I'm worthless," "I'll never make it," and "I'm so disappointed in myself." Item responses ranged from 1 (not at all) to 5 (almost all the time). The internal consistency for this instrument was .90 in the current sample. ATQ-P This scale was a five-item measure based on items taken from the positive thoughts version of the 30-item Automatic Thoughts Questionnaire (see Ingram et al. 1995). It has items such as "I am a lucky person," "My future looks bright," and "There are many people who care about me." Items were rated on a scale that ranged from 1 (not at all) to 5 (almost all the time). The internal consistency was .80 in the current sample. Loneliness Automatic Thoughts Questionnaire The Loneliness Automatic Thoughts Questionnaire (LATQ; Flett et al. 2020a, b) was developed for the present study. It consists of nine thoughts related to the current experience of loneliness. The concept of loneliness-related automatic thoughts was informed by case accounts of lonely people characterized by ruminative thoughts and tendencies (e.g., Cheng and Merrick 2017; Lui 2017; Tarocchi et al. 2013). It was also informed by the work of Horowitz and associates (Horowitz et al. 1982), who described a fuzzy set prototype for loneliness that linked loneliness almost inextricably with depression and associated negative judgments of the self. Relevant themes include a sense of being different from others, a sense that something is wrong with the self, and a sense of "I cannot" that is typically applied to positive interpersonal behavior but that we saw as relevant to not being able to control thoughts about loneliness. Sample items include "Why am I so lonely?" "I can't escape this loneliness," and "I can't stand to feel this alone." We began with a 23-item pool. These items were reduced to 12 items based on evaluations of item wording and face validity, and then re-evaluated and reduced to 10 items, with two items being slightly re-worded. These 10 items were administered to our participants with instructions and a response format similar to those used for the ATQ-N. Subsequent item analyses showed that, when considered collectively, the 10 items had a high level of internal consistency, and this was reflected in the item-total correlations. However, one item had a mean lower than 2.00 and a standard deviation lower than 1.00, so this item was removed. The nine remaining items had an internal consistency of .92. The items, along with scale features, are shown in the Appendix.
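The item-screening logic just described (inspect corrected item-total correlations, then drop items with a mean below 2.00 or a standard deviation below 1.00) can be sketched as follows. The thresholds mirror the description above; the function name and implementation details are our own.

```python
# A sketch of the item-screening rule described above; scores is an
# (n_respondents, k_items) matrix. Thresholds follow the text; the rest is ours.
import numpy as np

def screen_items(scores: np.ndarray, min_mean: float = 2.0, min_sd: float = 1.0):
    keep = []
    for j in range(scores.shape[1]):
        rest = np.delete(scores, j, axis=1).sum(axis=1)  # total minus item j
        r_it = np.corrcoef(scores[:, j], rest)[0, 1]     # corrected item-total r
        m, sd = scores[:, j].mean(), scores[:, j].std(ddof=1)
        print(f"item {j}: mean={m:.2f} sd={sd:.2f} item-total r={r_it:.2f}")
        if m >= min_mean and sd >= min_sd:
            keep.append(j)
    return keep  # indices of retained items
```

Applied to the 10 administered items, this rule would reproduce the reduction from 10 to nine items described above.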
UCLA Loneliness Scale-8 (ULS-8) An eight-item short form of the 20-item UCLA Loneliness Scale developed by Hays and DiMatteo (1987) was included to assess the frequency of overall loneliness from 1 (never) to 4 (always). The eight items were selected because they all loaded highly on a single factor, and the authors reported a correlation of .91 between their eight-item version and the full 20-item version. Short-form versions of the UCLA Loneliness Scale have been used successfully in various studies (e.g., Franzoi and Davis 1985). The internal consistency of this measure was .79 in the current study. A mood adjective rating instrument captured the extent to which participants reported emotional experiences since the onset of social distancing and remote learning due to the COVID-19 pandemic (i.e., current experience) as well as retrospective ratings of their typical levels of these same emotional experiences in regular times (i.e., typical experience). The mood states assessed are listed below. We assessed both current ratings and ratings of prior experience because we felt it was important to illustrate and document how the pandemic has impacted college students. Ideally, we would have had the opportunity to assess students prior to the pandemic but, of course, the pandemic was not foreseeable. Nevertheless, our primary focus was on current mood states reflecting the three categories described below. The retrospective ratings also served to underscore that, relative to life beforehand, the pandemic impacted both positive and negative mood states. Single adjectives were used to assess current mood states, with participants making seven-point ratings of their current feelings. We grouped the adjectives into three categories that we labeled distress (stress, depression, and anxiety), negative mood (frustration, helplessness, and boredom), and positive mood (optimism, satisfaction, and enjoyment). Although these adjectives were quite different from each other, our composite measures yielded adequate levels of internal consistency. The respective alphas were .84, .74, and .82 for distress, negative mood, and positive mood. Higher scores reflect higher levels of distress, negative mood, and positive mood. As seen in Table 2, the correlation between distress and negative mood was stronger, as would be expected, than the negative links that distress and negative mood had with positive mood. Initially, we examined whether students had assimilated to the experience of social distancing, isolation from campus, and online distance learning (approximately 75% of the second semester had already been completed online). Paired samples t tests were used to examine mean differences between participants' reported general versus current emotional experiences (distress, negative mood, and positive mood). Next, zero-order Pearson correlational analyses were conducted to examine the associations among all of the measures. The hypotheses for the present study were consistent with a path model in which personality predisposition variables were assumed to be associated with mattering variables, and both were assumed to be associated with automatic thoughts variables, with the mattering variables expected to add to the explained variance in automatic thoughts beyond what could be explained by the personality variables. Moreover, personality predispositions, mattering, and automatic thoughts variables were assumed to be associated with adaptability to the COVID-19 pandemic, with mattering expected to add to the explained variance in adaptability to the COVID-19 pandemic beyond what could be explained by the personality variables.
Moreover, the automatic thoughts variables were expected to add to the explained variance in adaptability to the COVID-19 pandemic above and beyond what could be explained by the personality and mattering variables. Finally, personality, mattering, automatic thoughts, and adaptability were assumed to be associated with emotional mood states, with each expected to add to the explained variance in current mood states above and beyond the previous predictors. Figure 1 presents the assumed model. Linear and hierarchical linear regressions that increase in their complexity were used to examine the associations presented in this model. Results for the analyses of this model are presented in Tables 3, 4, 5, and 6. As can be seen in Table 1, participants reported significantly higher levels of all current distress subscales (stress, anxiety, and depression) and current negative mood subscales (frustration, helplessness, and boredom) compared to their experience in general routine daily life. Moreover, they reported significantly lower levels of all current positive mood subscales (optimism, satisfaction, and enjoyment) compared to their experience in general routine daily life. The largest impact was found with respect to feelings of helplessness during the pandemic. Significant correlations among general and current emotional experiences indicate relative stability. The effect sizes for these paired samples t tests were computed according to Cohen (1988). As can be seen in Table 1, the effect sizes ranged from small (Cohen's d = 0.29) to large (Cohen's d = 0.82) and, overall, were medium in magnitude (mean Cohen's d = 0.57, SD = .13; see Chen et al. 2010). Overall, the results indicated that students had not assimilated to the protracted experience of social distancing and isolation from campus, and they continued to experience the pandemic period as less positive across all aspects of emotional experience. Composite scores for current distress, negative mood, and positive mood were used in subsequent analyses; this decision reduced the number of outcome variables (from 9 to 3) and, consequently, the number of models to be examined and the errors associated with a larger number of analyses. Table 2 presents the zero-order correlations among the study variables. As can be seen in Table 2, the study variables were all significantly correlated in the expected directions. As can be seen in Fig. 1, the present study contains five groups of predictive/outcome variables: personality predisposition variables (predictive), mattering variables (predictive and outcome), automatic thoughts variables (predictive and outcome), adaptability (predictive and outcome), and current mood states (final outcomes). A series of linear and hierarchical linear regressions that increased in their complexity was performed to examine the unique contribution of each variable within each of the four predictor groups as well as the unique contribution of each group of predictors above and beyond the previous groups of predictor variables. Fig. 1 The multivariate analyses model.
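Returning briefly to the Table 1 comparisons, the paired samples t tests and effect sizes reported above follow the pattern sketched below. The ratings are simulated for illustration, and the d_z convention (mean difference divided by the standard deviation of the differences) is one common reading of Cohen (1988), not necessarily the exact variant the authors used.

```python
# A simulated illustration of the paired-samples t test and effect size
# used for the general vs. current mood comparisons (data are not the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
general = rng.normal(3.0, 1.2, size=462)            # retrospective ratings
current = general + rng.normal(0.7, 1.0, size=462)  # elevated pandemic ratings

t, p = stats.ttest_rel(current, general)
diff = current - general
d_z = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired data (d_z form)
print(f"t({diff.size - 1}) = {t:.2f}, p = {p:.3g}, d = {d_z:.2f}")
```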
Results for the arrows marked "A" are presented in Table 3, results for the arrows marked "B" are presented in Table 4, results for the arrows marked "C" are presented in Table 5, and results for the arrows marked "D" are presented in Table 6. The multivariate analyses were designed to examine the associations presented in Fig. 1. This model consisted of four stages. A. Associations of personality predispositions with mattering variables. As can be seen in Table 3 (arrows marked "A"), while controlling for the shared variance of self-criticism, dependency, and self-esteem, self-esteem was positively associated with mattering and negatively associated with anti-mattering and fear of not mattering. Dependency and self-criticism were positively associated with fear of not mattering and anti-mattering, whereas self-criticism was negatively associated with mattering. This model significantly explained 20% of the variance in mattering, 30% of the variance in anti-mattering, and 46% of the variance in fear of not mattering. B. Associations of personality predispositions and mattering variables with automatic thoughts variables. As can be seen in Table 4 (arrows marked "B"), controlling for the shared variance among the personality variables as well as among the mattering variables, both personality trait vulnerability measures explained significant variance in automatic thoughts related to the COVID-19 pandemic. Specifically, dependency and self-criticism were positively associated with negative automatic thoughts, loneliness, and loneliness automatic thoughts and were negatively associated with positive automatic thoughts. Self-esteem was negatively associated with negative automatic thoughts and loneliness, positively associated with positive automatic thoughts, and negatively associated with loneliness automatic thoughts (p < .06, two-tailed). The mattering variables significantly added to the explained variance in automatic thoughts beyond the variance attributed to the personality measures. Specifically, mattering was negatively associated with negative automatic thoughts, loneliness, and loneliness automatic thoughts, whereas it was positively associated with positive automatic thoughts. In contrast, anti-mattering was positively associated with negative automatic thoughts, loneliness, and loneliness automatic thoughts, whereas it was negatively associated with positive automatic thoughts. Fear of not mattering was uniquely and significantly associated only with high levels of loneliness. This model significantly explained 51% of the variance in negative automatic thoughts, 45% of the variance in positive automatic thoughts, 58% of the variance in loneliness, and 40% of the variance in loneliness automatic thoughts. C. Associations of personality predispositions, mattering variables, and automatic thoughts variables with adaptability to the COVID-19 pandemic. As can be seen in Table 5 (arrows marked "C"), controlling for substantial shared variance, all personality variables were significantly associated with adaptability to the COVID-19 pandemic, with dependency and self-criticism having negative associations with adaptability, whereas self-esteem was positively associated with adaptability.
The mattering variables added significantly to the explained variance in adaptability beyond the variance explained by the personality variables, with mattering having a positive association with adaptability and anti-mattering having a negative association with adaptability. Fear of not mattering was not significantly associated with adaptability. Finally, the automatic thoughts variables significantly added to the explained variance beyond the variance explained by the personality and mattering variables, with negative automatic thoughts being negatively associated with adaptability and positive automatic thoughts being positively associated with adaptability. Loneliness and loneliness automatic thoughts were not found to be associated with adaptability. This model significantly explained 40% of the variance in adaptability to the COVID-19 pandemic. D. Associations of personality predispositions, mattering variables, automatic thoughts variables, and adaptability to the COVID-19 pandemic with emotional experience variables. As can be seen in Table 6 (arrows marked "D"), controlling for extensive shared variance, dependency and self-criticism were positively associated with distress and negative current mood states but negatively associated with positive current mood states. In contrast, self-esteem was negatively associated with distress and negative mood (p < .08, two-tailed) but positively associated with positive current mood. The mattering variables added significantly to the explained variance in current mood states, with mattering being negatively associated with distress and positively associated with positive mood; anti-mattering was positively associated with distress and negative mood, whereas fear of not mattering was positively associated only with distress. Automatic thoughts added significantly to the explained variance in current mood states beyond the explained variance of the personality and mattering variables. Specifically, negative automatic thoughts were positively associated with distress and negative mood, whereas positive automatic thoughts were negatively associated with distress and negative mood but positively associated with positive mood. Notably, loneliness scores were not significantly associated with current mood states, whereas loneliness automatic thoughts were positively associated with distress and negative mood but negatively associated with positive mood. Finally, adaptability to the COVID-19 pandemic significantly added, in the expected directions, to the explained variance in current mood states beyond the variance explained by the personality, mattering, and automatic thoughts variables, despite the correlations between adaptability and the other predictors. This model explained 52%, 43%, and 42% of the variance in distress, negative mood, and positive mood, respectively.
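The hierarchical logic of stages A-D can be summarized computationally: each block of predictors is entered in turn and judged by its increment to R². The sketch below uses simulated data and statsmodels; the variable groupings follow the model in Fig. 1, but the data and effect sizes are invented for illustration, not the study's coefficients.

```python
# A simulated sketch of the hierarchical regression strategy: enter predictor
# blocks in sequence and track the increment to R^2 (data/effects are invented).
import numpy as np
import statsmodels.api as sm

def r_squared(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

rng = np.random.default_rng(2)
n = 462
personality = rng.normal(size=(n, 3))   # dependency, self-criticism, self-esteem
mattering = rng.normal(size=(n, 3))     # mattering, anti-mattering, fear of not mattering
thoughts = rng.normal(size=(n, 4))      # ATQ-N, ATQ-P, ULS-8, LATQ
adaptability = rng.normal(size=(n, 1))  # adaptability to the pandemic
outcome = (personality @ [0.3, 0.4, -0.3] + mattering @ [-0.2, 0.3, 0.1]
           - 0.3 * adaptability[:, 0] + rng.normal(size=n))  # e.g., distress

X, prev = None, 0.0
for name, block in [("personality", personality), ("mattering", mattering),
                    ("automatic thoughts", thoughts), ("adaptability", adaptability)]:
    X = block if X is None else np.hstack([X, block])
    cur = r_squared(outcome, X)
    print(f"+ {name:18s} R^2 = {cur:.3f} (delta = {cur - prev:.3f})")
    prev = cur
```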
The current study examined the extent to which personality vulnerability factors reflecting risk, and other personality factors representing resilience, were associated with adaptability to the pandemic and associated thoughts and emotional experiences. A central theme of this research was the role of an internalized positive sense of self, reflected by self-esteem and feelings of mattering, versus a negative sense of self in being able to adjust successfully to the unique challenges posed by the COVID-19 global health crisis and all of the changes brought about as a result of this pandemic. A particular focus of this study was to evaluate individual difference factors (e.g., dependency, self-criticism) that could be used to account for the extent to which university students were able to adapt to conditions requiring them to engage in physical isolation that led to significant reductions in social contact. Most notably, in addition to examining individual difference factors that were possibly associated with indicators reflecting current levels of stress, distress, and positive mood, the current study also examined the correlates of feelings of loneliness as the pandemic continued to spread. Our approach to loneliness represents a distinguishing feature of this study because we went beyond usual forms of assessment (e.g., relying on a measure such as the UCLA Loneliness Scale) to also consider loneliness from a cognitive perspective. This reflected our attempt to identify people who are less able to adapt and cope with feelings of loneliness. This extended approach was designed to capture feelings of isolation during the pandemic that are accompanied by thoughts concerning loneliness as well as how to potentially escape this loneliness. As expected, it was found that both self-criticism and dependency were associated with more negative emotional reactions and elevated levels of loneliness as well as lower reported adaptability to the pandemic. This study is unique in that it examined self-criticism and dependency in a particular life context (i.e., the COVID-19 pandemic and all of its uncertainties and disruptions); few previous studies of these personality traits have examined them within the context of an ongoing stress situation that is impactful and highly relevant. One indication of the merits of examining these constructs in this situational context is the results obtained with the two loneliness measures. It has typically been the case in previous investigations, including some of our own past work, that self-criticism has a stronger association with loneliness than dependency does. This was not the case in the current study. Both self-criticism and dependency were positively associated with loneliness and loneliness automatic thoughts, but it is worth noting that, in both instances, the associations for dependency were stronger than those observed for self-criticism, and this was especially the case for the link between dependency and loneliness automatic thoughts. These findings are in keeping with the notion that it is useful to examine personality constructs from a personality-situation interaction perspective (see Flett et al. 1995); dependency in particular is best examined in circumstances that involve interpersonal challenges, such as the reduced social contact being experienced by our participants. As noted earlier, a positive self-model was reflected in the current study by the inclusion of measures of self-esteem and mattering. The obtained pattern of results is consistent with the general premise that resilience is associated with a positive view of the self. Both self-esteem and mattering were associated with higher reported levels of adaptability to the pandemic. Self-esteem and mattering were also associated with more positive emotions and positive automatic thoughts as well as fewer negative emotions and negative automatic thoughts. The findings linking mattering with both greater adaptability and reports of better emotional functioning are in keeping with calls to promote mattering among college and university students (see Flett et al.
2019) and the potential applications of emphasizing the relational side of the student experience. This emphasis may be particularly important when students have limited on-campus opportunities to connect with others. Multiple measures of mattering were included in the present study due to our expectation that having a sense of mattering to other people, and a lower fear of not mattering, would constitute a source of reassurance and comfort for people coping with less social contact with significant people in their lives. This expectation was supported: students in this study reported less loneliness and fewer loneliness-related thoughts if they had higher levels of mattering and lower levels of anti-mattering and fear of not mattering. These results are in keeping with broad calls to promote feelings of mattering to help people cope with the isolation and other stressors and strains associated with living through the current global health crisis (see Flett and Heisel 2020; Flett and Zangeneh 2020). Clearly, a central focus of our study involved individual differences in the perceived ability to adapt to the pandemic reported by our participants. The conceptual approach guiding this work was that adaptability to the pandemic is the most proximal factor in a conceptualization that combines distal factors with more proximal influences (see Fig. 1). The relevance and salience of perceived adaptability was illustrated through its robust links with lower levels of negative emotion and higher levels of positive emotion, as well as the strong links it had with a greater frequency of positive automatic thoughts and fewer reported negative automatic thoughts and loneliness-related automatic thoughts. Perhaps most revealing were the results showing that perceived adaptability explained unique variance in levels of distress and mood states even when controlling for personality, mattering, and automatic thoughts. These findings are noteworthy in several respects. For instance, these results demonstrate that self-reports of perceived adaptability as measured in the current study do not simply reflect a positive self-orientation and an inflated sense of the self, because numerous measures tapping into the self-concept had already been included as predictors and these variables have significant overlap with our measure of self-reported adaptability. More importantly, from a conceptual perspective, these findings attest to the role played by perceived adaptability and the need to include perceived adaptability as a central factor in models being developed to account for how people adjust to challenging circumstances that can impact their health, well-being, and social worlds. While it is important to consider resilience and the ability to bounce back from setbacks, it is also important to consider the ability to adapt and adjust to new, uncertain situations that can dramatically transform daily life. The pattern of correlational results involving adaptability to the pandemic can be seen as yielding some unique insights into the nature of the adaptability construct. For instance, the cognitive component of adaptability includes a perceived capability to revise and adjust thinking in a way that is similar to cognitive flexibility. Our results indicate that cognitive adaptability is reflected in a tendency to experience frequent positive automatic thoughts while having relatively few negative automatic thoughts and thoughts about feelings of loneliness.
One implication is that the promotion of adaptability will likely facilitate the type of cognitive orientation that benefits people in much the same way that effective cognitive behavior therapy and mindfulness training do. More generally, our results illustrate that meaningful individual differences are being measured when adaptability is focused specifically on adaptability to the pandemic. The adaptability construct is itself quite adaptable: it is typically measured as a general disposition, but clearly it can also be assessed with reference to specific contexts. Presumably, the measure could be modified to tap specific adaptability to any meaningful transition involving significant life change. The focus of this study has been on a variable-centered approach, but it is important to consider these implications from a person-centered perspective, especially for students who do not perceive themselves as having adapted very well to the challenges posed by the pandemic. These students are seemingly characterized by a preponderance of negative emotional states and considerable stress, and they will also have frequent automatic thoughts; these thoughts will seldom be positive in nature and will instead reflect depressogenic themes and thoughts about their loneliness. At an academic level, these characteristics and tendencies should limit the ability and capacity of these students to fully engage with the transition to online synchronous learning. Cognitive preoccupations and emotional arousal should add substantially to the challenges facing students who were already grappling with issues involving anxiety, depression, and self-doubt prior to the onset of the pandemic. Unfortunately, this comes at a time when social support and positive distractions are even less available, as daily routines are disrupted and there is little opportunity to pursue long-term goals. Taken to the extreme, it is easy to envision a demoralized student who may be feeling trapped and perhaps defeated by current circumstances. Collectively, our findings are consistent with calls to address the mental health concerns of people who are feeling overwhelmed and unable to easily adapt to their current and ongoing stressful situation. Our findings point to the need to direct counseling resources to students who have tendencies to be dependent and self-critical. These efforts can include preventive and proactive measures taken to promote general feelings of mattering and specific feelings of mattering and belonging in the university community, in line with recommendations put forth by Flett and associates (Flett et al. 2019). As noted above, one unique element of this study is that we included a newly developed measure of automatic thoughts about loneliness, the LATQ. Scores on this measure were associated with more frequent negative automatic thoughts and less frequent positive automatic thoughts. It is especially worth noting that this measure was associated with the ULS-8, but it was clearly not redundant with the ULS-8, in that the inclusion of the LATQ yielded some new insights. For instance, it was found that people with higher trait levels of dependency and self-criticism also report a tendency to experience thoughts about their loneliness. Also, the correlations displayed in Table 2 indicate that the link found between feelings of not mattering and loneliness reported by Flett and associates (Flett et al.
2016) was not only replicated in the current study; it was also shown that frequently occurring and perhaps repetitive thoughts about loneliness are associated with feelings of not mattering. This should be especially problematic for people who are physically isolating during the pandemic and have too much time to themselves, as they will likely have too many opportunities to consider just how alone and unimportant they seem from their own perspective. It is conceivable that frequent thinking and over-thinking about feelings of loneliness is a key element contributing directly to growing reports of a much higher prevalence of mental health problems and drug-related problems, including overdoses. It is useful to consider the current findings within the context of certain limitations. First, our results are based on cross-sectional research and, as such, no inferences can be made about possible causal associations. Second, self-reports are always susceptible to response bias, and this study is no exception. Third, it would have been helpful to assess more elements of the current living situations of our respondents so that we could take into account some unique elements that make the pandemic easier or more difficult to contend with. Finally, although it was not a central feature of our study, we must acknowledge the limitations inherent in asking our participants to retrospectively report on their emotional states as a supplement to their reports of current emotional states. These retrospective accounts may be substantially prone to bias and distortion, but as we noted earlier, these reports were included to further underscore just how different life is for students now as they cope with the pandemic versus their sense of how things used to be for them. The results shown in Table 1 indicate that even though our sample included students who saw themselves as adapting reasonably well to their new circumstances, there were exceptionally large differences in current mood states compared with the seemingly halcyon days before the COVID-19 pandemic. These differences extended to positive emotional experiences, as students reported less optimism, enjoyment, and satisfaction during these "pandemic days." In summary, the results of the current study illuminated the differences among university students in their reactions to the COVID-19 pandemic. There were meaningful individual differences in reported adaptability to the pandemic, and these were reflected in emotional and cognitive tendencies that apparently enhanced or detracted from the ability to adjust to this ongoing stressful situation. Support was found for our emphasis on the resilience that accompanies having established a positive sense of self, and on the risk and vulnerability that accompany a negative sense of self that can further fuel self-criticism, dependency, low self-esteem, and feelings of not mattering to other people. More generally, the current study attests to the value of conducting research from an individual difference perspective within actual and ongoing life contexts, which adds a component that is often lacking from other research investigations. Conflict of Interest The authors declare that they have no conflicts of interest. Appendix Listed below are a variety of thoughts that pop into people's heads. Please read each thought and indicate how frequently, if at all, the thought occurred to you over the last week.
Please read each item carefully and circle the appropriate answers on the answer sheet in the following fashion (1 = "not at all," 2 = "sometimes," 3 = "moderately often," 4 = "often," and 5 = "almost all the time").
Before the first outbreak of severe acute respiratory syndrome (SARS), a limited number of coronaviruses were known to be circulating in humans, causing only mild illnesses, such as the common cold [1]. Following the 2003 SARS pandemic [2,3], it became apparent that coronaviruses could cross the species barrier and cause life-threatening infections in humans; therefore, further attention needs to be paid to these new coronaviruses. The 21st century has seen the worldwide spread of two previously unrecognized coronaviruses, the severe acute respiratory syndrome coronavirus (SARS-CoV) [4] and the Middle East respiratory syndrome coronavirus (MERS-CoV), both of which are highly pathogenic. Starting in November 2002 in China [5], there were unprecedented person-to-person nosocomial transmissions of SARS-CoV, accompanied by high fatality rates. A united global effort led to the rapid identification of the SARS coronavirus and remarkable scientific advancements in epidemic control. Among the reported cases of SARS, 22% were healthcare workers in China, and more than 40% were healthcare workers in Canada [23]. Nosocomial transmission of MERS has similarly been seen in the Middle East [16] and in the Republic of Korea [22]. Outbreaks in other countries all resulted from the reported cases in the Middle East or North Africa, and transmission was the result of international travel. Both SARS and MERS caused large outbreaks with significant public health and economic consequences.

Table 1. Epidemiology and biological characteristics of the severe acute respiratory syndrome coronavirus (SARS-CoV) and the Middle East respiratory syndrome coronavirus (MERS-CoV).

Characteristic                   SARS-CoV     MERS-CoV
Genus                            Beta-CoV     Beta-CoV
Length of nucleotides            29,727       30,119
Open reading frames (ORFs)       11           11
Structural proteins              4            4
Non-structural proteins (NSPs)   At least 5   16
Accessory proteins               8            5
Characteristic gene order        5′-replicase ORF1ab, spike (S), envelope (E), membrane (M), and nucleocapsid (N)-3′ (both viruses)

Coronaviruses have the largest genomes (26-32 kb) among positive-strand RNA viruses and are about 125 nm in diameter [24]; they comprise four genera (alpha-, beta-, gamma-, and delta-coronavirus) [25]. Currently, six human CoVs (HCoVs) have been confirmed: HCoV-NL63 and HCoV-229E, which belong to the alpha-coronavirus genus; and HCoV-OC43, HCoV-HKU1, SARS-CoV, and MERS-CoV, which belong to the beta-coronavirus genus. SARS-CoV and MERS-CoV are the two major causes of severe pneumonia in humans and share some common coronavirus structural characteristics. Similarly, their genomic organization is typical of coronaviruses: an enveloped, single, positive-stranded RNA genome that encodes four major viral structural proteins, namely the spike (S), envelope (E), membrane (M), and nucleocapsid (N) proteins [3-5], which follow the characteristic gene order [5′-replicase (rep gene), spike (S), envelope (E), membrane (M), nucleocapsid (N)-3′] with short untranslated regions at both termini (Figure 1). The viral membrane contains the S, E, and M proteins, and the spike protein plays a vital functional role in viral entry. The rep gene encodes the non-structural proteins and constitutes approximately two-thirds of the genome at the 5′ end. In detail, the S protein is in charge of receptor binding and subsequent viral entry into host cells, and is therefore a major therapeutic target [26,27]. The M and E proteins play important roles in viral assembly, and the N protein is necessary for RNA synthesis.
The SARS-CoV genome is 29,727 nucleotides in length and includes 11 open reading frames (ORFs). The SARS-CoV rep gene, comprising about two-thirds of the genome, encodes at least two polyproteins (encoded by ORF1a and ORF1b) that undergo cotranslational proteolysis. In group 2 and some group 3 coronaviruses, a gene encoding hemagglutinin-esterase lies between ORF1b and S [4], but this was not detected in SARS-CoV. This virus differs significantly from previously reported coronaviruses for many reasons, such as the short anchor of the S protein, the specific number and location of small ORFs, and the presence of only one copy of PLpro. The MERS-CoV genome is larger than that of SARS-CoV, at 30,119 nucleotides in length, and comprises a 5′-terminal cap structure, a poly(A) tail at the 3′ end, and the rep gene containing 16 non-structural proteins (nsp1-16) at the 5′ end of the genome. Four structural proteins (S, E, M, and N) and five accessory proteins (ORF3, ORF4a, ORF4b, ORF5, and ORF8) constitute about 10 kb at the 3′ end of the genome. Unlike some other beta-coronaviruses, the MERS-CoV genome does not encode a hemagglutinin-esterase (HE) protein [1]. Genomic analysis of MERS-CoV implies the potential for genetic recombination during a MERS-CoV outbreak [9]. MERS-CoV and SARS-CoV possess five and eight accessory proteins, respectively, which might help the virus evade the immune system by being harmful to the innate immune response. These differences might lead to greater sensitivity to the effects of induction and signaling of type 1 interferons (IFNs) in MERS-CoV than in SARS-CoV.
Figure 1. Genome organization and S protein structure of SARS-CoV and MERS-CoV. The single-stranded RNA genomes of SARS-CoV and MERS-CoV encode two large genes, the ORF1a and ORF1b genes, which encode 16 non-structural proteins (nsp1-nsp16) that are highly conserved throughout coronaviruses. The structural genes encode the structural proteins, spike (S), envelope (E), membrane (M), and nucleocapsid (N), which are common features of all coronaviruses. The accessory genes (shades of green) are unique to different coronaviruses in terms of number, genomic organization, sequence, and function. The structure of each S protein is shown beneath the genome organization. The S protein mainly contains the S1 and S2 subunits. The residue numbers in each region represent their positions in the S protein of SARS and MERS, respectively. The S1/S2 cleavage sites are highlighted by dotted lines. SARS-CoV, severe acute respiratory syndrome coronavirus; MERS-CoV, Middle East respiratory syndrome coronavirus; CP, cytoplasm domain; FP, fusion peptide; HR, heptad repeat; RBD, receptor-binding domain; RBM, receptor-binding motif; SP, signal peptide; TM, transmembrane domain.

Both SARS and MERS cause severe pneumonia resulting from these novel coronaviruses, sharing some similarities in their pathogenesis (Figure 2) [28]. SARS is an emerging infectious viral disease characterized by severe clinical manifestations of the lower respiratory tract, resulting in diffuse alveolar damage. SARS-CoV spreads through respiratory secretions, such as droplets, via direct person-to-person contact. Upon exposure of the host to the virus, the virus binds to cells expressing the virus receptors, of which angiotensin-converting enzyme 2 (ACE2) is one of the main receptors; CD209L is an alternative receptor with a much lower affinity [29]. In the respiratory tract, ACE2 is widely expressed on the epithelial cells of the alveoli, trachea, bronchi, and bronchial serous glands [30], and on alveolar monocytes and macrophages [31]. The virus enters and replicates in these target cells, and the mature virions are then released from primary cells and infect new target cells [32]. Furthermore, as a surface molecule, ACE2 is also diffusely localized on the endothelial cells of arteries and veins, the mucosal cells of the intestines, the epithelial cells of the renal tubules, and cerebral neurons and immune cells, providing a variety of cells susceptible to SARS-CoV [33,34]. Respiratory secretions, urine, stools, and sweat from patients with SARS contain infective viral particles, which may be excreted into and contaminate the environment. Atypical pneumonia with rapid respiratory deterioration and failure can be induced by SARS-CoV infection because of increased levels of activated proinflammatory chemokines and cytokines [35]. For MERS-CoV infection of humans, the primary receptor is a multifunctional cell surface protein, dipeptidyl peptidase 4 (DPP4, also known as CD26) [36], which is widely expressed on epithelial cells in the kidney, alveoli, small intestine, liver, and prostate, and on activated leukocytes [37].
Consistent with this, MERS-CoV can infect several human cell lines, including lower respiratory, kidney, intestinal, and liver cells, as well as histiocytes, as shown by a cell-line susceptibility study [38], indicating that the range of MERS-CoV tissue tropism in vitro is broader than that of any other CoV. MERS-CoV causes acute, highly lethal pneumonia and renal dysfunction with various clinical symptoms, including, but not restricted to, fever, cough, sore throat, myalgia, chest pain, diarrhea, vomiting, and abdominal pain [39, 40]. Lung infection in the MERS animal model demonstrated infiltration of neutrophils and macrophages and alveolar edema [41]. The entry receptor (DPP4) for MERS-CoV is also highly expressed in the kidney, causing renal dysfunction through either hypoxic damage or direct infection of the epithelia [42]. Remarkably, unlike SARS-CoV, MERS-CoV can infect human dendritic cells [43] and macrophages [44] in vitro, helping the virus to disrupt the immune system. T cells are another target for MERS-CoV because of their high levels of CD26 [45]. This virus might deregulate antiviral T-cell responses by stimulating T-cell apoptosis [45, 46]. MERS-CoV might also lead to immune dysregulation [47] by inducing attenuated innate immune responses, with delayed proinflammatory cytokine induction in vitro and in vivo [44, 48, 49]. Trimers of the S protein make up the spikes of SARS-CoV; the S protein is synthesized as a surface glycoprotein precursor of 1,255 amino acids in length. Most of the protein, including the amino terminus, is situated on the outside of the virus particle or the cell surface [50]. The predicted structure of the S protein comprises four parts: a signal peptide located at the N terminus from amino acids 1 to 12, an extracellular domain from amino acids 13 to 1195, a transmembrane domain from amino acids 1196 to 1215, and an intracellular domain from amino acids 1216 to 1255. Proteases such as factor Xa, trypsin, and cathepsin L cleave the SARS-CoV S protein into two subunits, S1 and S2. A minimal receptor-binding domain (RBD) located in the S1 subunit (amino acids 318-510) binds the host cell receptor, ACE2. The RBD displays a concave surface during interaction with the receptor. The entire receptor-binding loop, known as the receptor-binding motif (RBM) (amino acids 424-494), is located on the RBD and is responsible for complete contact with ACE2. Importantly, two residues in the RBM, at positions 479 and 487, determine the progression of SARS disease and the tropism of SARS-CoV [51, 52]. Recent studies using civets, mice, and rats demonstrated that changes in these two residues might improve animal-to-human or human-to-human transmission and facilitate efficient cross-species infection [53]. The S2 subunit mediates the fusion between SARS-CoV and target cells and includes the heptad repeat 1 (HR1) and HR2 domains, with the HR1 region being longer than the HR2 region.
Similar to SARS-CoV, during the infection process, the S protein of MERS-CoV is cleaved into a receptor-binding subunit S1 and a membrane-fusion subunit S2 [54-57]. The MERS-CoV S1 subunit also includes an RBD, mediating the attachment between virus and target cells [54, 55, 58, 59]. Unlike SARS-CoV, MERS-CoV uses DPP4 (also known as CD26), not ACE2, as its cellular receptor [60, 61]. The RBDs of MERS-CoV and SARS-CoV differ, although they share a high degree of structural similarity in their core subdomains, explaining the different critical receptors noted above [57, 62]. The core subdomain of the RBD is stabilized by three disulfide bonds and includes a five-stranded antiparallel β-sheet and several connecting helices. The RBM comprises a four-stranded antiparallel β-sheet connected to the core via loops [57, 62]. Two N-linked glycans, N410 and N487, are seated in the core and RBM, respectively. In particular, residues 484-567 of the RBM are responsible for interacting with the extracellular β-propeller domain of DPP4. The fusion core formation of MERS-CoV resembles that of SARS-CoV; however, it differs from that of other coronaviruses, such as the mouse hepatitis virus (MHV) and HCoV-NL63 [63-66]. The SARS-CoV S protein plays pivotal roles in viral infection and pathogenesis [67, 68]. The S1 subunit recognizes and binds to host receptors, and subsequent conformational changes in the S2 subunit mediate fusion between the viral envelope and the host cell membrane [69, 70]. The RBD in the S1 subunit is responsible for virus binding to host cell receptors [61, 70, 71]. ACE2 is a functional receptor for SARS-CoV; 14 amino acids in the RBD of SARS-CoV make contact with 18 residues of ACE2 [53]. Position R453 in the RBD and position K341 in ACE2 play indispensable roles in complex formation [72]. Furthermore, N479 and T487 in the RBD of the S protein are pivotal positions for the affinity with ACE2 [52], and R441 or D454 in the RBD influences the antigenic structure and binding activity between the RBD and ACE2 [73]. In the transition from a pre-fusion structure to a post-fusion structure, binding of the RBD in the S1 subunit to the receptor ACE2 stimulates a conformational change in S2. Accordingly, the putative fusion peptide (amino acids 770-788) [74] inserts into the target cell membrane of the host. Meanwhile, the HR1 and HR2 domains form a six-helix bundle fusion core structure that brings the viral envelope and the target cell membrane into close proximity, contributing to fusion [74]. Like the S2 subunit of SARS-CoV, the MERS-CoV S2 subunit is responsible for membrane fusion, with the HR1 and HR2 regions in S2 playing essential and complementary roles [56, 63]. In order to control the outbreaks of these viruses, vaccines were developed against SARS-CoV and MERS-CoV. Various vaccine approaches exist, and their development and advantages/disadvantages are listed in Table 3. Importantly, among all the functional/non-functional structural proteins of SARS-CoV and MERS-CoV, the S protein is the principal antigenic component: it induces antibodies that block virus binding and fusion or neutralize virus infection, stimulates host immune responses, and/or protects the immune system against virus infection. Therefore, the S protein has been selected as a significant target for the development of vaccines.
It has been noted that antibodies raised against subunit S1 (amino acids 485-625) or S2 (amino acids 1029-1192) neutralize infection by SARS-CoV strains in Vero E6 cells [78, 79]. Researchers have constructed an attenuated parainfluenza virus encoding the full-length S protein of the SARS-CoV Urbani strain for the vaccination of African green monkeys. This vaccine protected monkeys from subsequent homologous SARS-CoV infection, demonstrating highly effective immunization with the S protein [80]. Other studies in a mouse model constructed a DNA vaccine encoding the full-length S protein of the SARS-CoV Urbani strain that not only induced T-cell and neutralizing-antibody responses, but also stimulated protective immunity [81]. Furthermore, monkeys or mice were vaccinated with a highly attenuated modified vaccinia virus Ankara encoding the full-length S protein of the SARS-CoV strain HKU39849 or Urbani [82]. However, full-length S protein-based SARS vaccines may induce harmful immune responses, causing liver damage in the vaccinated animals or enhancing infection after challenge with homologous SARS-CoV [83, 84]. Researchers are thus concerned about the safety and ultimate protective efficacy of vaccines that include the full-length SARS-CoV S protein. There are still no commercial vaccines available against MERS-CoV [26]. Multiple vaccine candidates targeting the S protein, which is responsible for viral entry, have been developed, including subunit vaccines [85, 86], recombinant vector vaccines [87, 88], and DNA vaccines [89, 90]. Importantly, compared with other regions of the S protein, the RBD fragment induced the highest-titer IgG antibodies in mice [85]. Modified vaccines, including recombinant vaccinia virus Ankara and adenovirus vectors expressing the MERS-CoV S glycoprotein, showed immunogenicity in mice [25]. Attenuated live vaccines also showed a protective function, but there were concerns regarding the degree of attenuation [91]. After intranasal vaccination with the CoV N protein, airway memory CD4 T cells were generated and mediated protection following a CoV challenge [92]. These cells could induce anti-viral innate responses at an early stage of infection and facilitated CD8 T-cell responses by stimulating dendritic cell migration and CD8 T-cell mobilization [92]. The stimulation of airway memory CD4 T cells should be regarded as an essential part of any HCoV vaccine strategy, because these CD4 T cells target a conserved epitope within the N protein that cross-reacts with several other CoVs [92]. Furthermore, DNA vaccines expressing the MERS-CoV S1 gene produced antigen-specific humoral and cellular immune responses in mice [89].
Table 3. Vaccine approaches against SARS-CoV and MERS-CoV.
Live-attenuated vaccines. Disadvantages: phenotypic or genotypic reversion possible; need sufficient viral replication [77].
Viral vector vaccines (a genetically engineered, unrelated viral genome with deficient packaging elements encoding the targeted gene). Targets: spike and nucleocapsid proteins for SARS-CoV [100, 104]; spike and nucleocapsid proteins for MERS-CoV [87, 88]. Advantages: safety; stronger and more specific cellular and humoral immune responses [77]. Disadvantages: different inoculation routes may produce different immune responses [96]; possibly incomplete protection; may fail in aged vaccinees; possible TH2 cell-distorted immune response [105].
Subunit vaccines (antigenic components that induce the immune system without introducing viral particles, whole or otherwise). Targets: spike proteins for SARS-CoV [53, 59, 106]; spike and nucleocapsid proteins for MERS-CoV [85, 86, 107, 108].
Advantages: high safety; consistent production; can induce cellular and humoral immune responses; high-titer neutralizing antibodies [109]. Disadvantages: uncertain cost-effectiveness; relatively lower immunogenicity; need appropriate adjuvants [77].
DNA vaccines (genetically engineered DNA that directly produces an antigen). Targets: spike and nucleocapsid proteins for SARS-CoV [110, 111]; spike and nucleocapsid proteins for MERS-CoV [89, 90]. Advantages: easier to design; high safety; high-titer neutralizing antibodies [110]. Disadvantages: lower immune responses; potential TH2 cell-distorted immune response; potentially ineffective; possible delayed-type hypersensitivity [112].
Despite extensive research on SARS-CoV and MERS-CoV therapies, it was not possible to establish whether treatments benefited patients during their outbreaks. In the absence of fundamental, clinically proven, effective antiviral therapy against SARS-CoV and MERS-CoV, patients mainly receive supportive care supplemented by diverse combinations of drugs. Several approaches are being considered to treat infections of SARS-CoV [113] and MERS-CoV (Table 4; the MERS-CoV-related entries were previously reviewed by de Wit et al. in Nature Reviews Microbiology, 2016 [10]), including the use of antibodies, IFNs, and inhibitors of viral and host proteases. The vital role of the S protein of SARS-CoV makes this protein an important therapeutic target, and numerous studies have explored potential therapeutics. Firstly, peptides that block RBD-ACE2 binding, derived from both the RBD [114] and ACE2 [76], could be developed as novel therapeutics against SARS-CoV infection. Secondly, peptides binding to the S protein interfere with the cleavage of S1 and S2; this inhibits the production of functional S1 and S2 subunits and the consequent fusion of the viral envelope with the host cell membrane. Thirdly, anti-SARS-CoV peptides that block the HR1-HR2 interaction, and thus the formation of the fusion-active core, have viral fusion inhibitory activity at the micromolar level [115-117]. However, the potential selection of escape mutants with altered host-range phenotypes is one disadvantage of this strategy that needs further attention [118]. Furthermore, mouse monoclonal antibodies (mAbs) targeting assorted fragments of the SARS-CoV S protein have effectively inhibited SARS-CoV infection [79, 119-122]. A series of neutralizing human mAbs were generated from the B cells of patients infected with SARS-CoV [123, 124]. Another strategy used human immunoglobulin transgenic mice immunized with full-length SARS-CoV S proteins [125-127]. 80R and CR3014, which block binding to the ACE2 receptor, are examples of S-specific mAbs [128, 129]. Similarly, the therapeutic agents that have been developed against MERS-CoV are based on the S protein and primarily restrain receptor binding or membrane fusion, thereby inhibiting MERS-CoV infection. These approaches mainly involve peptidic fusion inhibitors [56, 63, 116, 130], anti-MERS-CoV neutralizing mAbs [86, 131], anti-DPP4 mAbs [86, 132, 133], DPP4 antagonists [134], and protease inhibitors [135-137]. However, none of these anti-MERS-CoV curative agents are approved for commercial use in humans. Table 4. Potential therapeutics for severe acute respiratory syndrome (SARS) and MERS.
Therapeutic (stage of development for SARS; stage of development for MERS):
Host protease inhibitors: effective in mouse models [138]; in vitro inhibition [138].
Viral protease inhibitors: in vitro inhibition [139]; in vitro inhibition [140].
Monoclonal and polyclonal antibodies: effective in mouse, ferret, golden Syrian hamster [124, 141, 142], and non-human primate models [143, 144]; effective in mouse, rabbit, and non-human primate models [10, 145].
Convalescent plasma: off-label use in patients [146, 147]; effective in a mouse model, clinical trial approved [10].
Interferons: off-label use in patients (often in combination with immunoglobulins or thymosins) [146, 147]; effective in non-human primate models, off-label use in patients (often in combination with a broad-spectrum antibiotic and oxygen) [10].
Ribavirin: off-label use in patients (often in combination with corticosteroids) [146, 147]; effective in a non-human primate model, off-label use in patients (often in combination with a broad-spectrum antibiotic and oxygen) [10].
Lopinavir and ritonavir: off-label use in patients (improved the outcome in combination with ribavirin) [146, 147]; effective in a non-human primate model, off-label use in patients [10, 148].
Common feature: none of these therapeutic agents is approved for commercial use in humans.
International coordination and cooperation led to the rapid identification of SARS-CoV and MERS-CoV. The emergency control measures and laboratory detection systems put in place in response to the SARS-CoV and MERS-CoV outbreaks were both exemplary. To establish optimal prevention and control strategies for SARS and MERS, numerous efforts to develop animal models were undertaken in several laboratories, although some conflicting results have been reported. It is therefore necessary to compare and document the features and disadvantages of different animal models to better understand viral replication, transmission, pathogenesis, prevention, and treatment. Notably, several animal species have been suggested as suitable disease models of SARS-CoV, but most laboratory animals are refractory or only semi-permissive to MERS-CoV infection. SARS-CoV replication has been studied in mice, Syrian golden and Chinese hamsters, civet cats, and non-human primates. The most severe symptoms of SARS were observed in aged animals. To recapitulate the epidemiological observation that advanced age results in increased mortality, an aged mouse model of SARS-CoV infection was generated. Transgenic mice expressing human ACE2 were also developed to more closely mimic SARS-CoV infection in humans. Some animal models have been analyzed at the genomic and proteomic levels to study the pathogenesis of SARS. Mouse strains that have been used as SARS-CoV-infected animal models include BALB/c [149, 150], C57BL/6 (B6) [151], and 129SvEv-lineage mice. Relevant transgenic and knockout lines derived from these susceptible strains are also available [152]. Signal transducer and activator of transcription 1 (STAT1)-knockout and myeloid differentiation primary response 88 (MYD88)-knockout mice [149, 151, 153, 154] are examples of mouse models with innate immune deficiency; such animals display severe effects of the disease, such as pneumonitis, bronchiolitis, and weight loss, and often die within 9 days of infection. Notably, young mice require more mutations and passages than aged mice to produce SARS-CoV mouse-adapted strains.
More severe pathological lesions and increased mortality were observed in one-year-old animals, along with fewer mutations at miscellaneous locations throughout the genome [98, 155-159]. Intranasal inoculation of four- to eight-week-old BALB/c or B6 mice with SARS-CoV resulted in virus replication in the nasal turbinates of the upper respiratory tract and high-titer virus replication in the lungs of the lower respiratory tract, and this model was highly reproducible, without any signs of morbidity or mortality [149, 151]. Neutralizing antibody responses could be generated in sub-lethally infected mice, protecting recipients from subsequent lethal challenges, which probably reflects the situation in infected humans during an epidemic [160]. However, on day 2-3 post-infection (pi), virus replication in the respiratory tract peaked but was not accompanied by massive pulmonary inflammation or pneumonitis. By day 5-7 pi, the virus had been eliminated from the lungs [149, 151]. Notably, viremia is common and long-lasting in patients, whereas it is rare and transient in mouse models [161]. Mice can therefore be used as a stable and reproducible animal model for the evaluation of vaccines, immune prophylaxis, and antiviral drugs against SARS-CoV [81, 96, 109, 124, 149, 162-166]. Golden Syrian and Chinese hamsters have also been evaluated and shown to be excellent models of SARS-CoV infection, owing to the high titer of virus replication in the respiratory tract, associated with diffuse alveolar damage, interstitial pneumonitis, and pulmonary consolidation [104, 167-169]. On day 2 pi, peak levels of viral replication were detected in the lower respiratory tract, and the virus was cleared without obvious clinical illness 7-10 days after infection. As in mice, infected hamsters also produced a protective neutralizing-antibody response to subsequent SARS-CoV challenges [104, 170]. Owing to the extremely high titers and reproducible pulmonary pathological lesions in SARS-CoV-infected hamsters, this animal model is ideal for studies on the immunoprophylaxis and treatment of SARS [104, 170]. However, resources for hamster models remain limited in terms of genetically established animal lines and accurate immunological and cellular biomarkers. Ferrets were found to be susceptible to SARS-CoV infection [171] and could also transmit the virus at low levels by direct contact [84, 172-174]. They showed diverse clinical symptoms in different studies [171, 174]. Importantly, ferrets can develop fever, a characteristic clinical symptom of SARS-CoV-infected patients [93, 175]. Similar to rodent models, infection of ferrets with SARS-CoV did not result in significant mortality. However, there are still some conflicting reports regarding the histopathological lesions and the severity of clinical observations in the ferret model that require further investigation. Several species of non-human primates (NHPs) have been evaluated as animal models for SARS. At least six NHP species were tested, including three Old World monkeys (rhesus macaques [176-180], cynomolgus macaques [177, 181, 182], and African green monkeys [177]) and three New World monkeys (the common marmoset [183], squirrel monkeys, and mustached tamarins [176-178, 181-184]). Except for squirrel monkeys and mustached tamarins [185], all of the evaluated NHP species supported the replication of SARS-CoV [186].
Virus replication was detected in the respiratory tract of rhesus macaques, cynomolgus macaques, and African green monkeys. Pneumonitis was observed in each of these species in different studies [176-178, 182]. SARS-infected common marmosets displayed fever, watery diarrhea, pneumonitis, and hepatitis [183]. Unfortunately, research into the clinical signs of disease in cynomolgus and rhesus macaques gave conflicting results and therefore needs further investigation. The main reason for the lack of reproducibility in such studies may be the limited sample size. Small animal models of MERS infection are urgently needed to elucidate MERS pathogenesis and explore potential vaccines and antiviral drugs. Previous studies have demonstrated the difficulties in developing such a model: mice [187, 188], ferrets [134], guinea pigs [189], and hamsters [189] are not susceptible to experimental MERS-CoV infection, mainly because their homologous DPP4 molecules do not function as receptors for MERS-CoV entry. After administration of a high dose of MERS-CoV, no viral replication could be detected in these animals [190]. In an animal model using New Zealand white rabbits, although viral RNA was detectable in the respiratory tract and moderate necrosis was observed in the nasal turbinates, the animals showed no clinical symptoms of disease [191]. In another study, attempts to infect hamsters with MERS-CoV were not successful [192]. Despite this, MERS-CoV has a broad host range in vitro [25], and there is hope that a reproducible and stable animal model of human MERS-CoV infection can be developed in the near future. Although wild-type rodents are not susceptible to MERS-CoV infection [188], researchers have developed several models in which mice are rendered susceptible [193-195]. The first mouse model of MERS infection, reported in 2014, involved transducing animals intranasally with recombinant adenovirus 5 encoding human DPP4 (hDPP4), which resulted in replication of MERS-CoV in the lungs. This mouse model also showed clinical signs of interstitial pneumonia, including inflammatory cell infiltration, alveolar thickening, and mild edema [195]. However, this model has certain limitations, such as the uncontrolled expression and distribution of hDPP4. In 2015, the establishment of hDPP4-transgenic mice was reported [194], and MERS-CoV could infect this mouse model effectively. However, similarly to SARS-CoV-infected ACE2-transgenic mice [196], systemic expression of the receptor led to multiple-organ lesions [194], resulting in the death of the animals. Most recently, the homologous hDPP4 gene has been used in several MERS transgenic mouse models [193, 197]. Remarkably, hDPP4 knockin (KI) mice, in which mouse DPP4 gene fragments were replaced by homologous human DPP4 fragments, showed effective receptor binding. Furthermore, a mouse-adapted MERS-CoV strain (MERS-MA) carrying 13-22 mutations was generated after 30 serial passages in the lungs of hDPP4-KI mice, causing marked weight loss and mortality in this mouse model [193]. Both the hDPP4-KI mouse and the MERS-MA strain provide better tools to explore the pathogenesis of MERS and potential novel treatments. As a reservoir of MERS-CoV, dromedary camels show mild upper respiratory infections after the administration of MERS-CoV [198].
Oronasal infection of alpacas, a close relative within the Camelidae family, with MERS-CoV resulted in asymptomatic infection with no signs of upper or lower respiratory tract disease [199, 200]. Additionally, owing to their high cost and relatively large size, these animal models are not suitable for high-throughput studies of MERS. NHPs, such as rhesus macaques [201] and common marmosets [202], are useful models for studying the pathogenesis of mild MERS-CoV infection and evaluating novel therapies for humans, although the degree of replication and disease severity vary [192, 201, 203, 204]. MERS-CoV caused transient lower respiratory tract infection in rhesus macaques, with associated pneumonia. Clinical signs were observed by day 1 pi and resolved as early as day 4 pi [201]. Relatively mild clinical symptoms were observed early in infection, without fatalities, indicating that rhesus macaques do not recapitulate the severe infections observed in human cases; however, treatment of MERS-CoV-infected rhesus macaques with IFN-α and ribavirin decreased virus replication, alleviated the host response, and improved the clinical outcome [205]. MERS-CoV infection of common marmosets caused varying extents of damage depending on the study, but successfully reproduced several features of MERS-CoV infection in humans. Importantly, one study indicated that the infection progressed to severe pneumonia [203], while other groups found that MERS-CoV-infected common marmosets developed only mild to moderate, non-lethal respiratory disease after intratracheal administration [206]. The reasons for host restriction and for the absent or limited clinical symptoms observed in various animal models are complex. The interaction between the host receptor and the functional proteins of SARS-CoV and MERS-CoV, respectively, plays an important and predominant role. In the context of animal models of SARS-CoV infection, researchers have compared the ACE2 amino acids that interact with the S protein RBD across several species. In agreement with the permissive nature of these species, the ACE2 residues of marmoset and hamster are similar to those of hACE2 [53]. By comparison, many residues of mouse ACE2 differ from those of hACE2, consistent with the decreased replication of SARS-CoV in mouse cells [207] and in the lungs of young mice [149]. The changes at positions 353 (histidine) and 82 (asparagine) of rat ACE2 relative to hACE2 partially disrupt the S protein-ACE2 interaction and contribute to the abrogation of binding. Interestingly, ferrets are permissive to SARS-CoV infection even though most of their ACE2 interaction residues differ from those of hACE2 [53], while many ACE2 residues are shared between civet and ferret, which may result in similar binding affinity [208]. For MERS-CoV, 14 residues of the S protein RBD make direct contact with 15 residues of hDPP4 [57]. Comparisons of the DPP4 binding affinity of different species indicated that human DPP4 has the highest affinity for the S protein of MERS-CoV, with affinity decreasing in the following order: human > horse > camel > goat > bat [209]. Further evidence demonstrates that the host restriction of MERS-CoV depends markedly on the sequence of DPP4, as shown by the characterization of amino acid residues at the interface of DPP4 with the RBD of the S protein in mice [187, 210], hamsters, and cotton rats [210].
However, the differing disease severity between rhesus macaques and common marmosets indicates that other host factors, such as the presence of S-cleaving proteases, may also affect the infection and replication of the virus [187]. In general, although structural analyses of receptor-S protein interactions cannot fully explain all observations of host restriction, they agree with the improved replication seen in several animal models, and such interactions should be a primary focus of small-animal model development. The residues governing host affinity are important for building transgenic animal models with enhanced permissiveness to SARS-CoV and MERS-CoV infection. Unlike SARS-CoV, which resolved without further reported cases, continued outbreaks of MERS-CoV present an ongoing threat to public health. It should be noted that no specific treatment is currently available for HCoVs, and further research into the pathogenesis of HCoV infection is therefore imperative to identify appropriate therapeutic targets. Accordingly, at present, the prevention of viral transmission is of utmost importance to limit the spread of MERS. The large proportion of nosocomial infections indicates that preventive measures in hospitals have not been sufficiently implemented. Additionally, as MERS-CoV is an emerging zoonotic virus, preventing transmission from dromedary camels is another way to reduce the number of MERS cases. Regarding clinical therapies, a combination treatment administered as early as possible and aimed at synchronously disrupting viral replication, inhibiting viral dissemination, and restraining the host response is likely to be most suitable, owing to the acute clinical features of MERS, with diffuse lung damage, and the important role of immunopathology. Potential treatments must undergo in vitro and in vivo studies to select the most promising options. The development of stable and reproducible animal models of MERS, especially in NHPs, is therefore a decisive step forward. The next step in the development of standardized and controllable therapies against SARS and MERS will be clinical trials in humans, validating a standard protocol for dosage and timing, and accruing data in real time during future outbreaks to monitor specific adverse effects and help inform treatment. The comprehensive lessons and experience resulting from the outbreaks of SARS and MERS provide valuable insight into how to react to future emerging and re-emerging infectious agents. Rapid identification of the pathogen via effective diagnostic assays is the first step, followed by the implementation of preventive measures, including raising awareness of the new agent, reporting and recording (suspected) cases, and infection control management in medical facilities. Studies are currently needed that focus on the epidemiology of these organisms, especially in terms of pathogen transmission and potential reservoirs and/or intermediate hosts. Animal models and prophylactic and therapeutic approaches should be promoted, followed by fast-tracked clinical trials. Our increasing understanding of novel emerging coronaviruses will be accompanied by increasing opportunities for the rational design of therapeutics. Importantly, understanding this basic information will not only aid our public health preparedness against SARS-CoV and MERS-CoV, but also help prepare for novel coronaviruses that may emerge.
Several countries have exempted respiratory patients from the compulsory use of face masks indoors and outdoors during the coronavirus disease 2019 pandemic. It must be strongly stated that such exemption is not evidence-based, and it may carry an increased risk of personal infection for the estimated 544·9 million people worldwide suffering from a chronic respiratory disease. i Beyond hand hygiene and physical distancing, face masks are fundamental for personal and group protection to prevent the spread of infection, both in patients and in their caretakers. ii Ultimately, human behavior is certainly the main determinant of the spread or containment of the disease. Considering that the virus spreads largely through the respiratory tract, experts are proposing that, beyond protecting others, face masks help their wearers. In the new (although not fully demonstrated) COVID-19 inoculum theory, it is proposed that universal masking reduces the inoculum or dose of the virus for the wearer, leading to milder or asymptomatic infection. iii,iv Many countries have already defined national policies implementing the compulsory use of face masks (Figure 1). v Further, several countries have instituted penalties for non-compliant individuals. But there are exceptions. In Spain, since May 21, 2020, vi face masks must be worn in the "public street, in open-air spaces and any closed space that is for public use or that is open to the public, where it is not possible to maintain [an interpersonal] distance of two meters". According to the Spanish order, people with respiratory problems, or those who cannot wear masks for other health reasons or due to a disability, are exempt from wearing them. In the US, certain "Face Mask Exemption Cards" are already circulating. vii At this stage, it is important to address the question: are there medically justified exemptions for face coverings? Relieving respiratory patients from the obligation to wear masks could be highly deleterious for them, since by definition those patients with respiratory conditions who cannot tolerate face masks are at higher risk of severe COVID-19. Although face masks undoubtedly increase breathing resistance, the degree of discomfort experienced by some patients is influenced by its affective component. Dyspnea is a sensation, and supratentorial affects such as anxiety and claustrophobia might cause the added sensation of 'being unable to breathe' with a mask. Indeed, the WHO states that face masks of breathable material, worn properly, will not lead to health problems. Whether persons not wearing face masks play a role in the persistence or resurgence of COVID-19 in many countries is not firmly established. We must acknowledge that there is not (yet) a body of evidence to support the proposed approach of universally recommending face masks in public. Any statement suggesting that all types of face masks have a protective role needs to be accompanied by a recognition of their diversity. Similar to 'drugs', the efficacy of face masks depends highly on a number of characteristics, some of which have been formally assessed and some of which have not been or cannot easily be (e.g. comfort, social acceptance, …). The second waves being experienced globally, despite widespread masking, suggest that masks alone are insufficient interventions; in all likelihood, containment depends on a large number of measures, not just masks. At this time, professional associations have not provided clear recommendations on exemptions (or the lack thereof).
Within the Respiratory Effectiveness Group, we do not see asthma, COPD, or other respiratory diseases as an impediment to wearing a face mask, unless the person is in active acute respiratory distress, in which case going out in public is not advised. Therefore, we propose the cautionary step of not exempting respiratory patients from the compulsory use of face masks. Our duty remains to encourage patients to follow strictly the measures aimed at protecting them from getting or transmitting the disease. Adaptations of their activities (less time spent in public spaces) may be required to decrease the time during which they need to wear a face mask, and whenever possible other protective measures, such as social distancing, could be prioritized. COVID-19 is a new, devastating, but potentially preventable disease, and a key priority is to identify the combination of measures that minimizes societal and economic disruption while adequately controlling infection. viii It is crucial for patients with respiratory conditions to wear face masks when they are in public spaces where social distancing cannot be applied easily. ix Developing new models of face masks designed for patients with impaired lung function could help. Footnote: The GCSI ranges from 0 to 100 and includes face mask population coverage.
It is well known that the implementation of evidence-based stroke care guidelines can effectively improve outcomes and prevent recurrence in patients with stroke. 1 In 2010, Taiwan implemented a nationwide collaborative model called the Breakthrough Series (BTS)-Stroke activity, adapted from the Get With The Guidelines-Stroke program; this significantly improved outcomes on quality measures of acute ischemic stroke (AIS) care. 2 During the coronavirus disease 2019 (COVID-19) pandemic, routine care of stroke may be compromised because of reallocation of medical resources. In Taiwan, the first confirmed COVID-19 case was reported on January 21, 2020. Because of the Taiwanese government's aggressive containment efforts, 3 the cumulative number of COVID-19 cases, as of May 2020, was as low as 442. Whether the number of daily admissions and quality metrics for stroke care changed during the COVID-19 pandemic period warrants investigation. We retrospectively analyzed registry-based data from 18 hospitals in Taiwan, including seven medical centers and 11 community hospitals. The 18 hospitals were distributed across Taiwan's different administrative districts, which together contain >65% of the total population (Supplementary Table 1). All the hospitals had participated in the BTS-Stroke activity. 2 The performance measures and safety indicators were modified from the original BTS-Stroke quality metrics established in 2010 (Supplementary Table 2) and were reviewed monthly. Individual patient-level information was de-identified before analysis. The number of monthly admissions for stroke (including ischemic and hemorrhagic stroke) and 15 quality-of-care metrics were compared between the main outbreak (March 2020), early outbreak (January and February 2020), and control (January to March 2019) phases. Since the BTS-Stroke activity mainly focused on AIS-related quality metrics, the number of AIS admissions was also recorded. Detailed methods and statistical analyses are presented in the Supplementary methods. As the cumulative number of COVID-19 cases increased, there was a significant decrease in mean daily stroke admissions in the first quarter of 2020 (β=−0.07, P<0.001), which was not observed in 2019 (β=−0.03, P=0.13) (Figure 1A). Similar trends were observed in medical centers (β=−0.07, P=0.007) and community hospitals (β=−0.07, P=0.02) (Figure 1B). The comparison between the first quarters of 2019 and 2020 is presented in Table 1. The number of daily stroke admissions decreased in 2020 compared with 2019 (41.2 vs. 44.3; incidence rate ratio [IRR], 0.93; P=0.001), as did AIS admissions (29.9 vs. 32.6; IRR, 0.93; P=0.001). The quality metrics were generally comparable, and several metrics of intravenous thrombolysis, endovascular thrombectomy, early and discharge antithrombotic use, and rehabilitation evaluation even improved in 2020. Several metrics increased further in the main outbreak phase (P=0.001). The quality metrics of rehabilitation evaluation and stroke education also improved. We found that most stroke quality measures during the current study periods considerably improved compared with the initial BTS-Stroke activity implementation period of 2010 to 2011. 2 More importantly, the overall quality of acute stroke care was well maintained, or even further improved for several metrics, during the early and main outbreak periods, indicating that the effect of the quality improvement program persists over time.
As expected, stroke admissions in Taiwan decreased by approximately 13% to 16% in the main COVID-19 outbreak phase. However, this reduction appears much smaller than the global average reduction of 42% reported by the World Stroke Organization. 4 During the outbreak, patients with mild stroke symptoms may have been less willing to visit the hospital or may have taken longer to do so. 5 Our data showed a trend toward a decreasing proportion of mild stroke (National Institutes of Health Stroke Scale [NIHSS] <4; 40.2% vs. 42.6%; OR, 0.91; P=0.07) and of mild to moderate stroke (NIHSS <10; 73.3% vs. 76.0%; OR, 0.87; P=0.02) in 2020 compared with 2019. In addition, the number of early arrivals was higher in 2020 than in 2019; these patients most likely had considerable neurological signs and were thus sent to hospitals earlier. The proportion of patients receiving acute reperfusion therapy did not decrease in 2020, suggesting that the quality of acute stroke intervention was maintained during the pandemic. When encountering an outbreak of a highly contagious disease, the performance of timely and emergent acute stroke care could be compromised. Modification of the hyperacute stroke management protocol has been advocated during this pandemic in many countries, including Taiwan. 6,7 In this study, the proportion of patients with a door-to-computed tomography time ≤25 minutes was lower in the main outbreak phase, which may have delayed hyperacute stroke management. 8 Nevertheless, the proportion of patients with a door-to-needle time ≤60 minutes was not affected, suggesting that the participating hospitals made their best effort to adhere to hyperacute stroke protocols. The main limitation of our study was that we were only able to use month-based, hospital-level data; detailed individual patient-level data, such as demographic profiles and stroke severities, could not be analyzed. In addition, Taiwan was far less severely affected by the pandemic than other countries, hence the generalizability of our results should be interpreted with caution. In conclusion, we showed a collateral adverse effect on stroke admissions even in a country less affected by COVID-19. A well-implemented performance improvement program can maintain stroke care quality even during public health crises. The data covered 18 hospitals in Taiwan's different administrative districts; these districts together contain >65% of the total population. The enrolled hospitals included seven medical centers and 11 community hospitals, and the corresponding principal investigators were all members of the Taiwan Stroke Society (Supplementary Table 1) and participants in the BTS-Stroke activity, where they received training in the measurement of quality and safety from trained neurologists, study nurses, and stroke case managers. The details of the training and data collection process involved in the BTS-Stroke activity have been reported previously. 6 Individual patient-level information was de-identified before analysis. This study was approved by the National Taiwan University Hospital Research Ethics Committee, No. 202004035RINA. The original BTS-Stroke quality metrics were established in 2010 and included 14 performance measures and safety indicators.
These indicators are the percentage of (1) patients presenting with stroke symptoms for <2 hours who have a door-to-computed tomography time ≤25 minutes; (2) patients who arrived at the participating hospital <2 hours after symptom onset who receive intravenous tissue plasminogen activator (IV-tPA; IV-tPA for early arrival); (3) patients with acute ischemic stroke (AIS) who receive IV-tPA treatment; (4) patients who arrived <2 hours after symptom onset who have a door-to-needle time ≤60 minutes; (5) patients who underwent IV-tPA treatment who developed symptomatic intracerebral hemorrhage (ICH); (6) patients with AIS who receive intraarterial thrombolysis; (7) patients who receive antithrombotic medication ≤48 hours after admission (early antithrombotic use); (8) patients who undergo dysphagia screening before any oral intake; (9) patients with atrial fibrillation who are prescribed oral anticoagulants at discharge; (10) patients with a lipid-lowering drug prescription for low-density lipoprotein ≥100 mg/dL at discharge (lipid-lowering drug use); (11) patients with an antithrombotic prescription at discharge (antithrombotic use at discharge); (12) patients who are evaluated for stroke rehabilitation services (rehabilitation evaluation); (13) patients (and/or caregivers) who receive stroke education (stroke education); and (14) 30-day mortality among patients with stroke. Since 2015, endovascular thrombectomy (EVT) has become the standard treatment for patients with AIS with large vessel occlusion. Therefore, we replaced intraarterial thrombolysis with EVT and added metric 15: symptomatic ICH after EVT (Supplementary Table 2). Stroke severity, represented by the National Institutes of Health Stroke Scale (NIHSS), was not included in the BTS-Stroke activity. Nevertheless, we collected patients' NIHSS scores in four strata: <4, 4 to 10, 11 to 20, and >20. All quality metrics were reviewed on a monthly basis in each participating hospital. Furthermore, the total number of monthly stroke admissions (including those for AIS, transient ischemic attack, and hemorrhagic stroke) was recorded. Since the BTS-Stroke activity mainly focused on AIS-related quality metrics, we further recorded the number of AIS admissions. The study period was January 1 to March 31, 2020, and the control period was January 1 to March 31, 2019. Coronavirus disease 2019 (COVID-19) statistics were collected from the bulletins and press releases of the Central Epidemic Command Center (CECC), a specialized task force under Taiwan's Centers for Disease Control. We collected the daily numbers of confirmed cases (reported from home quarantine and enhanced surveillance). The mean daily stroke and AIS admissions were calculated from their monthly counterparts (monthly figures/number of days in the month). The changes over the 3 consecutive months were estimated using a generalized estimating equation. Differences in mean values between study periods were compared using Poisson regression and expressed as incidence rate ratios with their 95% confidence intervals. All quality metrics are represented as percentages (%). Because the denominator in these percentages may change according to which patients are covered by any given quality metric, both the numerator and denominator are reported in the results. When appropriate, the chi-square test or Fisher's exact test was used to compare quality metrics between the study and control periods.
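The two comparisons described here, admission rates via Poisson regression (yielding an IRR) and metric percentages via a chi-square test, map onto standard statistical routines. The following is a minimal sketch in Python, assuming statsmodels and scipy are available; all counts are hypothetical placeholders, not the study data.

```python
# Minimal sketch of the rate and proportion comparisons (placeholder data).
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2_contingency

# Daily admission counts: Poisson regression with a period indicator gives
# the incidence rate ratio (IRR) for 2020 vs. 2019.
counts = np.array([44, 46, 43, 41, 40, 42])   # hypothetical daily admissions
period = np.array([0, 0, 0, 1, 1, 1])         # 0 = 2019 (control), 1 = 2020
X = sm.add_constant(period)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
irr = np.exp(fit.params[1])                   # exponentiated coefficient = IRR
print(f"IRR = {irr:.2f}, p = {fit.pvalues[1]:.3f}")

# Quality metrics are percentages; a chi-square test compares the two periods.
# Rows: period; columns: metric met / not met (hypothetical counts).
table = np.array([[320, 80],    # 2019: 320 of 400 patients met the metric
                  [350, 70]])   # 2020: 350 of 420 patients met the metric
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```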
Logistic regression analysis was used to calculate the odds ratios for quality of care between the two periods, and penalized maximum likelihood (the Firth method) was used for parameter estimation to handle rare outcomes. 7 In brief, we first compared the first quarters of 2019 and 2020 with respect to the mean numbers of daily stroke admissions and the quality-metric percentages. Prespecified subgroup analyses were performed for medical centers and community hospitals. Because the number of confirmed COVID-19 cases increased substantially in mid-March, we considered March 2020 as the most affected month (i.e., the main outbreak phase). Thus, we further compared data for March
The coronavirus disease 2019 (COVID-19), first identified in Wuhan, China in December 2019, has spread globally (Sun et al. 2020), overwhelming many national health care systems with an increasing number of serious and potentially life-threatening infections (Anderson et al. 2020). As a result, the majority of governments have imposed mitigation and suppression strategies, such as social distancing and lock-down measures, to better control the spread of the virus. To ensure the effectiveness of these strategies, various epidemiological models are used to predict the spread of COVID-19 and to inform government policies (Hellewell et al., 2020; Sameni, 2020; Radulescu and Cavanagh, 2020; Simha et al. 2020; Zhao and Chen, 2020). Most commonly, these models follow a general Susceptible-Infected-Removed (SIR) framework (Kermack and McKendrick, 1927; Bailey, 1975; Sameni, 2020). Infected individuals can only transmit the virus to susceptible individuals. Once infected individuals have recovered (or have passed away) they can no longer infect others and cannot be reinfected. With regard to COVID-19, the SIR model is often expanded to an SEIR model, which considers an additional Exposed (E) stage during which individuals have been infected but are not yet contagious (Radulescu and Cavanagh, 2020; Wu et al. 2020). Many additional parameters, including clustering (Luo et al., 2020), age heterogeneity (Chang et al., 2020; Radulescu and Cavanagh, 2020), changes in policy and control measures (Sameni, 2020; Zhao and Chen, 2020), and even meteorology (Jia et al., 2020) have been incorporated into the SIR and SEIR frameworks in an attempt to increase the predictive power of these models. Furthermore, there are models that consider an inherent randomness or stochasticity in the events that influence the model outcomes (Hellewell et al., 2020; Kucharski et al., 2020; Simha et al. 2020). However, none of the above models currently predicting the COVID-19 pandemic take into consideration the structure underlying the human interaction network. Networks can be used to represent connections between individuals. A network of 10,000 nodes, for example, can be used to describe 10,000 individuals. The nodes in the network are connected by edges, which represent the interactions between those nodes. Two nodes sharing the same edge are considered to directly interact with one another and are referred to as neighbouring nodes. The number of edges of a node is referred to as the degree of that node (e.g. a node with two edges has a degree of 2). Nodes with many edges (and thus many neighbours) are referred to as "hubs". It is critical to note that, from a network analysis perspective, hubs do not necessarily refer to social gatherings. Hubs refer to individuals with many connections to other individuals; these connections do not necessarily occur at the same time. For example, a doctor who individually meets with 30 patients per day is considered a hub of 30 (i.e. a node of degree 30). Current COVID-19 models are based on differential equations or random diffusions which assume that human interaction behaviours are generally homogeneous (i.e. alike). It is, however, well-documented that many biological interactions, including human face-to-face communications, are not homogeneous and not random (Barabasi, 2009; Cattuto et al., 2010; Zhao and Bianconi, 2011; Zhang and Li, 2012).
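For reference, the compartmental SIR framework contrasted here is usually written as a system of ordinary differential equations; a standard textbook formulation (the transmission rate β, removal rate γ, and population size N are conventional symbols, not parameters taken from this paper) is

$$\frac{dS}{dt} = -\beta\,\frac{S I}{N}, \qquad \frac{dI}{dt} = \beta\,\frac{S I}{N} - \gamma I, \qquad \frac{dR}{dt} = \gamma I,$$

with basic reproductive ratio $R_0 = \beta/\gamma$. The SEIR variant inserts an exposed compartment, $dE/dt = \beta S I / N - \sigma E$, with the inflow $\sigma E$ replacing the direct infection term in $dI/dt$.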
In fact, the degree distribution of nodes (which captures differences in the number of connections between nodes) in a biological network can often be described by a power-law (Barabasi and Albert, 1999). A power-law describes a relationship whereby one quantity varies as a power of another (e.g. the area of a square quadruples when the length of its sides is doubled). In networks, this property arises when there are few nodes with many edges and many nodes with few edges. Networks with this kind of property are known as scale-free. The rate of infection of COVID-19 has previously been shown to follow a power-law (Ziff and Ziff, 2020; Li et al., 2020); yet, so far, scale-free networks have not been considered to model the disease. When a disease spreads through a network, the infected nodes (I) can spread the disease only to their susceptible neighbours (S), who can then spread the disease to their susceptible neighbours, and so on. Using network science, Dezso and Barabasi (2002) previously identified that the best possible way to stop a virus from spreading in a scale-free network is to bias policies towards hubs (i.e. heavily connected nodes) in the network. They showed this using a Susceptible-Infected-Susceptible (SIS) model, a model that is arguably more suitable for the spread of computer viruses than infectious diseases. Here we re-assess the predictions of Dezso and Barabasi (2002) using an SIR model, with the model parameters tailored specifically towards COVID-19. Our results demonstrate how mitigation strategies that directly target hubs in the network are far more effective than strategies that randomly decrease the number of connections between individuals. For example, reducing the total number of interactions that each individual in the network can have is potentially more effective than limiting the number of interactions that an individual can have at the same time. Although we have chosen model parameters that are based on the current COVID-19 pandemic, the model results cannot be considered a reliable prediction of the spread of this pandemic. Instead, they illustrate how network topology can improve the predictive power of such models. We propose that network topology should be combined with dynamic approaches in order to strengthen the predictive power of future pandemic models (Piccardi and Casagrandi, 2009). We further demonstrate how network topology can be used to suggest mitigation and containment strategies. In this sense, the results are generally applicable to a wide range of contagious infectious diseases. We set up a scale-free network using the Barabasi and Albert (1999) algorithm for a population of 10,000 (see Methods and Materials for details). The network is set up such that the majority of nodes in the network have fewer than n connections. Due to the scale-free structure of the network, there are some nodes that are heavily connected and have far more than n connections; these nodes are hereafter referred to as "hubs". The value of n is an arbitrary cut-off and is dependent on the network structure. Here, we set n=8 so that hubs represent the 5-10% most connected nodes (Fig. S1). All nodes in the network were initially set to be in the susceptible (S) state. Randomly, the state of one node was changed to the infected (I) state, representing patient zero. Thereafter, at each time point, the infected node(s) can infect any of their neighbouring nodes, changing their state from S to I.
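As a concrete illustration of this construction, the sketch below builds a comparable scale-free network with networkx. The preferential-attachment parameter m=2 and the random seed are assumptions made for illustration; the paper specifies only the population size (10,000) and the hub cut-off (degree > 8).

```python
# Minimal sketch: a Barabasi-Albert scale-free network and its hubs.
import networkx as nx

N = 10_000           # population size (as in the paper)
M = 2                # edges added per new node (assumed for illustration)
HUB_CUTOFF = 8       # nodes with more than 8 connections are "hubs"

G = nx.barabasi_albert_graph(N, M, seed=42)

# Hubs: the heavily connected tail of the power-law degree distribution.
hubs = [node for node, degree in G.degree() if degree > HUB_CUTOFF]
print(f"{len(hubs)} hubs ({100 * len(hubs) / N:.1f}% of nodes)")
```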
This transmission can occur from 2-14 days after a node was infected (Lauer et al. 2020). The probability of transmission is highest from the 4th to the 6th day after being infected. After the 2-14 days of possible transmission, infected nodes (I) change to the removed (R) state. Nodes in the R state cannot become reinfected and can no longer infect others. We then ran one set of models using the original scale-free network. We ran another set of models for which all hubs in the network had been contained; we randomly selected and retained 8 edges of each hub and removed all others. In this mitigated scenario, all nodes have a degree of 8 or less; we refer to this scenario as "Mitigation Hub". To compare mitigation strategies that specifically target hubs to general mitigation strategies, we calculated the average number of edges that were removed in the mitigation strategy. We generated a third set of networks where we removed the same number of edges randomly within the scale-free network; we refer to this scenario as "Mitigation Random". We ran the SIR model on all three network types (Scale-free, Mitigation Hub, and Mitigation Random). Figure 1 demonstrates how the number of susceptible, infected, and removed nodes changes over time. There are two general outcomes in all three models: (i) the infection dies out quickly and does not spread throughout the network, or (ii) the number of removed nodes becomes high enough that the spread of the infection is contained. In outcome (i) the majority of the population remains susceptible; in (ii) the majority of the population is infected and removed (i.e. has recovered or passed away). In outcome (ii) the population achieves a state known as "herd immunity" (Anderson and May, 1986; Kwok et al., 2020), whereby a high enough percentage of the population is immune to stop the virus from spreading. An illustrative example of the outcomes is provided, from a network point of view, in Figure S2. The distributions of the final state of the nodes in all model outcomes are shown in Figure 2A. The outcomes of the Mitigated Hub and Mitigated Random networks are similar in that they require a lower number of removed nodes to achieve herd immunity when compared with the outcomes of the scale-free network. However, the outcome of achieving herd immunity is also less likely when running the SIR model on the two mitigated networks than on the original scale-free network (Fig. 2B).
Figure 2 caption: (A) Final states of the nodes for the three networks (Scale-free: the original network; Mitigated Hub: where edges were removed from the scale-free network such that all nodes have no more than 8 connections; Mitigated Random: where the same number of edges were removed randomly). (B) Final states of the two model outcomes (No Herd Immunity: where the infection dies out early, before spreading, and the majority of the population remains susceptible; Herd Immunity: where the infection stops spreading because the majority of the population is immune). 5,000 model simulations were run for 170 days.
When running the SIR model on the scale-free network, over 80% of the population needs to become infected in order for herd immunity to be achieved (Fig. 1A). In the mitigated scenarios, herd immunity can be achieved with around 70-75% of the population becoming infected (Fig. 1B,C). This is because the number of connections between nodes is limited.
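The simulation and both mitigation strategies can be sketched as follows, under stated assumptions: the day-by-day transmission probabilities are placeholders shaped like the description above (contagious on days 2-14, peaking on days 4-6), not the paper's actual values, and the network parameters follow the earlier illustrative sketch.

```python
# Minimal sketch of the network SIR model with the two mitigation strategies.
import random
import networkx as nx

INFECTIOUS_WINDOW = range(2, 15)   # days 2..14 since infection
# Placeholder probabilities: higher on days 4-6, lower on other infectious days.
P_TRANSMIT = {d: (0.15 if 4 <= d <= 6 else 0.05) for d in INFECTIOUS_WINDOW}

def mitigate_hubs(G, cutoff=8):
    """Mitigation Hub: keep `cutoff` random edges of each hub, remove the rest."""
    H = G.copy()
    for node in list(H.nodes):
        neighbours = list(H.neighbors(node))
        if len(neighbours) > cutoff:
            for nb in random.sample(neighbours, len(neighbours) - cutoff):
                H.remove_edge(node, nb)
    return H

def mitigate_random(G, n_edges):
    """Mitigation Random: remove `n_edges` edges chosen uniformly at random."""
    H = G.copy()
    H.remove_edges_from(random.sample(list(H.edges), n_edges))
    return H

def run_sir(G, days=170):
    """Run the SIR model on G; returns daily (S, I, R) counts."""
    infected = {random.choice(list(G.nodes)): 0}   # patient zero; age = days since infection
    removed = set()
    history = []
    for _ in range(days):
        newly = set()
        for node, age in infected.items():
            if age in P_TRANSMIT:                  # contagious window
                for nb in G.neighbors(node):
                    if nb not in infected and nb not in removed and nb not in newly:
                        if random.random() < P_TRANSMIT[age]:
                            newly.add(nb)
        # Nodes past day 14 move to the removed state; the rest age by one day.
        removed.update(n for n, a in infected.items() if a >= 14)
        infected = {n: a + 1 for n, a in infected.items() if a < 14}
        infected.update({n: 0 for n in newly})
        history.append((len(G) - len(infected) - len(removed),
                        len(infected), len(removed)))
    return history

# Matched comparison: the random strategy removes as many edges as hub capping did.
G = nx.barabasi_albert_graph(10_000, 2, seed=42)
G_hub = mitigate_hubs(G)
G_rand = mitigate_random(G, G.number_of_edges() - G_hub.number_of_edges())
s, i, r = run_sir(G_hub)[-1]   # final S/I/R counts after 170 days
```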
When specifically limiting the connections of hubs, the rate at which the virus spreads through the network is slowed down (Fig. 1B & Fig. 3). When removing hubs from the network, the onset of more than 1% of the population being infected occurs later than with any of the other networks (Fig. 3A). Also, in the Mitigated Hub scenario, the rate of infection is slower, such that at the peak of infection a significantly lower percentage of nodes is infected with the virus (p < 0.01) (Fig. 3B). This means that when the SIR model is run on the Mitigated Hub network, the time for which more than 1% of the population is infected is longer and the peak of infection occurs later in time (Fig. S3); thus, the curve is flatter (Fig. 1B). The onset of more than 1% of the population being infected occurs just after the first hub has been infected (Fig. 3A). For the Mitigated Hub network (which contains no hubs), the time of onset at which more than 1% of the population becomes infected is more varied, as indicated by the wider distribution shown in orange. Figure 3: Time at which more than 1% of the population is infected in the Herd Immunity outcome. The SIR model was run on the three networks (Scale-free: the original network; Mitigated Hub: where edges were removed from the scale-free network such that all nodes have no more than 8 connections; Mitigated Random: where the same number of edges were removed randomly). (A) frequency distributions of the time of onset at which more than 1% of the population is infected for the three networks (Scale-free: red; Mitigation Hub: orange; Mitigation Random: brown) and of the time at which the first hub (i.e. the first node with more than 8 connections) is infected in the scale-free network (blue) and in the Mitigation Random network (purple). (B) the number of nodes that are infected at the peak of the infection divided by the final total number of removed nodes, shown as a percentage. For both (A & B), 5,000 model simulations were run for 170 days. All our networks consist of a population of 10,000 individuals, thus representing a small community. Including more individuals in the networks would not change the proportionality of our results. However, previous network analyses have suggested that networks representing human interactions across wider geographical regions should consider transitivity (Serrano and Boguna, 2006; Friemel, 2011). Transitivity is a measure of the density of connections in a network (Wasserman and Faust, 1994): the higher the transitivity of a network, the denser its connections. We calculated the transitivity for all our generated networks and found that when network transitivity is high, the peak of infection rises and ceases sooner (Fig. 4). Networks with no hubs (Mitigation Hub) are the least densely connected, and thus their peak of infection is flattest (Fig. 1 & 4). Figure 4: The effect of network transitivity on the spread of an infection. Transitivity of the generated networks (Scale-free in red: the original network; Mitigated Hub in orange: where hubs with more than 8 connections have been removed; Mitigated Random in brown: where edges have been removed randomly) plotted against (A) the time when more than 1% of the population is first infected in the herd immunity outcome and (B) the total time for which more than 1% of the population is infected in the herd immunity outcome. 300 model simulations were run for 170 days.
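The transitivity values analysed here are available directly in igraph; for instance, the global transitivity of a freshly generated scale-free network can be read off in one call (our illustration, not the authors' code):

```python
# Global transitivity (clustering coefficient) of a generated network.
import igraph as ig

g = ig.Graph.Barabasi(n=10_000, m=2, power=1)
print("transitivity:", g.transitivity_undirected())
```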
We have shown that incorporating a scale-free network structure of human interactions into an SIR model of COVID-19 can provide novel insights into potential strategies and policies for mitigating and suppressing the spread of the virus. Our results demonstrate that targeting hubs in a network has the potential to slow down the rate of infection and to "flatten the curve". Removing hubs from a scale-free network markedly reduces the number of individuals infected at any one time and could therefore reduce the strain on health care systems, if implemented early on. We show that removing edges from hubs rather than from random locations in the network is a more successful strategy for slowing down the spread of the virus. Thus, limiting the total number of interactions that each individual in a network can have could be an effective policy for reducing the number of simultaneous infections. We further demonstrate that the onset of the peak of infection occurs shortly after the first hub is infected. If, by chance, no hub is infected early on, then the infection either dies out or the peak is noticeably delayed until a hub is infected. These results align with data from Italy, which suggest that the COVID-19 epidemic must have entered the region far earlier than when the first case was officially detected (Cereda et al., 2020). According to that study, the virus could have circulated for several weeks with the number of infected people remaining low, until it started to rise sharply. Our results further align with those obtained by Dezso and Barabasi (2002), who modelled the spread of a virus using an SIS model and suggested targeting hubs as a primary mitigation strategy. Their model, however, does not allow for a herd immunity outcome, as it does not consider a removed state. All of our SIR models present two possible outcomes: either the infection dies out quickly and the majority of the population remains susceptible, or the majority of the population gets infected and herd immunity forces the virus to stop spreading. If vaccines are available, herd immunity can be achieved more quickly: vaccines would allow nodes in the network to move from the susceptible state straight into the removed (i.e. recovered) state. Unfortunately, no vaccine for COVID-19 is available at the time of writing (Amanat and Krammer, 2020; Pompetchara et al., 2020). We did not opt for an SEIR model, but instead set the probability of infection very low for the first few days after contracting the virus. This matches the currently available data on COVID-19 and has a similar effect to that of an SEIR model. Furthermore, we did not distinguish between recovered and deceased nodes in the models. From a modelling perspective, whether a node recovers or dies has the same implication: it cannot be reinfected, nor can it spread the virus. From a human health perspective, the fact that the number of removed nodes is likely to include a number of deaths (depending on the fatality of the virus) paints a potentially sombre picture for the model outcomes in which herd immunity is achieved. For COVID-19, the case fatality rates of Germany and Italy were recently calculated to be 0.2% and 7.7%, respectively (Lazzerini and Putoto, 2020). Evidently, these figures depend on the number of cases tested as well as other factors.
Nevertheless, the potential magnitude of fatality rates implied by these numbers does suggest that mitigation strategies that lower the number of infected people required for herd immunity may be necessary to keep high-risk citizens safe from the infection. In this case, a combination of hub-specific and equally applied (i.e. random) mitigation strategies should be considered. Hub-specific mitigation strategies are more likely to lower the peak of infection, whereas equally applied mitigation strategies are more likely to increase the number of individuals who never contract the disease. Evidently, the results of our model come with a set of assumptions that are not necessarily valid under the current and evolving circumstances of the spread of COVID-19. For example, our network analysis considers the number of connections to be constant across time. This, however, is unlikely: even without government-enforced policies, individuals are likely to reduce their face-to-face interactions once they fall ill. Workers providing essential services, such as healthcare and delivery personnel, may on the other hand increase their number of interactions as the virus spreads. This could result in hubs becoming even more connected over time. Furthermore, both of the mitigation strategies discussed here are applied before the onset of the infection, in order to illustrate a point. A more pragmatic model should consider how network dynamics and policies enforced during the pandemic can alter the degree distribution of the network over time (Piccardi and Casagrandi, 2009; Barrat et al., 2013). We showed that the transitivity of a network (i.e. the density of its connections) correlates with the sharpness of the infection peak. The onset of the peak occurs later and is slowed down in networks where hubs have been removed, because these networks have a reduced transitivity. However, our results did not take into account modularity, also known as community structure. Previous research has shown that community structure in networks can reduce the risk of a pandemic (Eguiluz and Klemm, 2002; Huang and Li, 2007; Stegehuis et al., 2016). We therefore recommend that community structures be implemented when upscaling our approach to model the spread of an infection across a wider geographical network. In conclusion, we present a fundamental example of why the network structure that underlies human interactions should be taken into consideration when modelling the spread of a virus such as COVID-19. We emphasize that the model results should not be used as predictions of the spread of COVID-19, but as a guide to how current epidemiological models could be improved. Incorporating network science into the current dynamic models of COVID-19 is likely to improve their predictive power. Doing so will allow better-informed suggestions for disease mitigation and suppression, such as reducing the number of hubs in a network. Strategies that consider the underlying network structure of human interactions will allow the implementation of more tailored policies for dealing effectively with COVID-19 and other pandemics.

Methods and Materials

Barabasi and Albert (1999) graphs of 10,000 nodes were generated using the igraph package (Version 0.8.0) in Python (Version 3.6.9) with a power of 1 and an edge-per-node connectivity of 2. These are scale-free networks whose node degree distribution follows a power law (e.g. Fig. S1). We then generated a set of networks in which we removed all hubs from the original scale-free versions.
If a node in the scale-free network contained more than 8 edges, we randomly removed edges from that node such that it had only 8 edges left in total. We refer to these networks as "Mitigated Hub" networks. When generating 5,000 Mitigated Hub networks, we removed, on average, 4613 edges per network. As a control, we also generated a set of networks in which we removed the same number of edges randomly (i.e. not targeting hubs). We refer to these networks as "Mitigated Random" networks. In total we generated 3 sets of networks (Scale-free, Mitigated Hub, and Mitigated Random), with 5,000 networks per set. Network properties, such as transitivity, were also calculated using the igraph package. The nodes of the networks can be in a susceptible (S), an infected (I), or a removed (R) state. Initially, all nodes were set to state S. At the first time point, we randomly chose one of the nodes in the network to be in the infected state. This node represents patient zero, who first contracts the virus from a non-human source. We then update the model such that, at each subsequent time point, infected nodes can infect their susceptible neighbouring nodes with a given probability p. Infected nodes can remain infected for 2 to 14 days (Lauer et al., 2020). The length of time that a node is infected is randomly chosen, with equal probability. Once a node is no longer infectious, it is set to the R state and cannot be infected again. The probability with which each infected node can spread the virus to its susceptible neighbours peaks at around 5 days after contracting the infection. To match the currently available data for COVID-19, we set p = [0.01, 0.01, 0.1, 0.2, 0.3, 0.3, 0.3, 0.25, 0.2, 0.15, 0.1, 0.05, 0.01, 0.01], representing the probability of infection on each of the 14 days, respectively. All models are run to completion (i.e. until the infection dies out). Once the infection dies out, all nodes are either in the R or the S state. We ran the SIR model described here on each of the generated networks (see Network Generation). We used the Kruskal-Wallis test, as implemented in the scipy package (Version 1.4.1), to test for significant differences between distributions. The code used to generate all of the data presented in this study is available on GitHub (https://github.com/HAHerrmann/NetworkEpidemics) and Zenodo (DOI: 10.5281/zenodo.3736466). Figure S1: Example of a degree distribution of a Barabasi and Albert (1999) graph generated using the igraph package (Version 0.8.0) in Python (Version 3.6.9) with a power of 1 and an edge-per-node connectivity of 2. Nodes with a degree greater than 8 (i.e. nodes with a degree to the right of the red cut-off) are considered hubs and make up less than 10% of the total number of nodes. Figure S3: Duration of peak infection and the day at which the greatest number of nodes are infected when running the SIR model on three different networks (Scale-free: the original network; Mitigated Hub: where edges were removed from the scale-free network such that all nodes have no more than 8 connections; Mitigated Random: where the same number of edges were removed randomly).
(A) the average total number of days for which more than 1% of the population is infected. (B) the average number of days after onset of the infection when the greatest number of nodes is infected at the same time. Both (A & B) were calculated from 5,000 model runs; error bars show the respective standard deviation. Only outcomes where herd immunity was achieved were considered.
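One way to implement the SIR process described in the Methods above is sketched below. The day-indexed probability vector P is copied from the text; the synchronous daily update, the uniformly drawn 2-14 day infectious period, and all names are our own reading of the description, not the authors' published code:

```python
# Minimal sketch of the network SIR process described in the Methods.
import random
import igraph as ig

# Daily transmission probabilities for days 1-14 after infection (from text).
P = [0.01, 0.01, 0.1, 0.2, 0.3, 0.3, 0.3, 0.25, 0.2, 0.15, 0.1, 0.05, 0.01, 0.01]

def run_sir(g, max_days=170, seed=None):
    """Run the SIR process on graph g. States: 0=S, 1=I, 2=R.
    Returns a per-day history of (S, I, R) counts."""
    rng = random.Random(seed)
    n = g.vcount()
    state = [0] * n
    day_of_illness = [0] * n         # days since a node was infected
    duration = [0] * n               # infectious period drawn per node
    patient_zero = rng.randrange(n)  # randomly chosen first case
    state[patient_zero] = 1
    duration[patient_zero] = rng.randint(2, 14)
    history = []
    for _ in range(max_days):
        infected = [v for v in range(n) if state[v] == 1]
        if not infected:
            break                    # run to completion: infection died out
        for v in infected:
            p = P[min(day_of_illness[v], len(P) - 1)]
            for u in g.neighbors(v):
                if state[u] == 0 and rng.random() < p:
                    state[u] = 1     # newly infected; transmits from tomorrow
                    duration[u] = rng.randint(2, 14)
            day_of_illness[v] += 1
            if day_of_illness[v] >= duration[v]:
                state[v] = 2         # removed (recovered or deceased)
        history.append((state.count(0), state.count(1), state.count(2)))
    return history

history = run_sir(ig.Graph.Barabasi(n=10_000, m=2, power=1), seed=42)
```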
prothrombin time (PT) are commonly observed, the mechanisms supporting the development of CAC are still elusive [1]. The International Society on Thrombosis and Haemostasis (ISTH) recently published an "interim guidance" aiming to define some characteristics of CAC [6]. Among others, the ISTH interim guidance highlighted some frequently observed hemostatic alterations in patients with CAC: D-dimer is usually increased, prothrombin time is usually only marginally deranged, and platelet count is usually within range or slightly reduced. These observations are in line with the preliminary reports from the beginning of the pandemic outbreak [6-8]. Critically, the mentioned hemostatic alterations differ from those usually observed during disseminated intravascular coagulation (DIC) [9, 10]. While discriminating between a consumption coagulopathy and a thrombotic microangiopathy might be challenging, the differentiation is crucial for therapeutic purposes. In order to further investigate CAC, we tested ADAMTS-13 and von Willebrand factor (vWF) plasma levels in 88 consecutive patients admitted with PCR-proven COVID-19 (main characteristics detailed in Table 1). ADAMTS-13 activity and von Willebrand factor antigen (vWF:Ag) measurements were performed using CLiA activity assays (HemosIL AcuStar ADAMTS13 activity, IL, Lexington, MA, USA). ADAMTS-13 levels were significantly reduced in all COVID-19 patients (CP) when compared to healthy controls (HC) (CP, mean 48.71 ± 18.7%; HC, 108 ± 9.1%; normal range 60-130%). These deranged values are similar to those observed in patients with thrombotic thrombocytopenic purpura (TTP), while ADAMTS-13 is not generally reduced during DIC. Antibodies directed against ADAMTS-13 (assessed by Bethesda assay) were tested in the 25 patients with the lowest ADAMTS-13 levels. Both ADAMTS-13 activity and anti-ADAMTS-13 antibodies were tested on the same sample. No patient was found to have significantly increased levels of anti-ADAMTS-13 antibodies (a borderline level was found in 1/25 patients). Overall, in our cohort we observed a mortality rate of 10.2% (9/88). Patients who died had significantly lower levels of ADAMTS-13 and higher levels of von Willebrand factor (vWF) when compared to patients with a non-fatal outcome (Table 1). On survival analysis, ADAMTS-13 plasma levels < 30% were significantly associated with higher mortality (Fig. 1). Interestingly, as previously reported, elevated levels of D-dimer were also associated with a fatal outcome [4, 5]. Taken together, the features of CAC seem more in line with a thrombotic, TTP-like microangiopathy (almost normal hemostasis, elevated vWF, low ADAMTS-13, platelet count slightly reduced) than with DIC (PT and antithrombin levels reduced, fibrinogen reduced, platelets variably reduced, ADAMTS-13 and vWF not typically described). These features could be related to ADAMTS-13 "consumption" due to an excess of circulating vWF (a thrombotic tendency secondary to a "gain of factor"); the presence of anti-ADAMTS-13 antibodies was excluded in our cohort. In conclusion, high vWF plasma levels associated with low ADAMTS-13 could explain, at least in part, the strong thrombotic tendency in these patients. Our data could have the potential to guide possible new therapeutic options.
The COVID-19 pandemic is presenting a global challenge not just in terms of infectious disease but also for mental health. The pandemic rapidly disseminated across the world; Nepal reported its first case on January 25, 2020 and its first death on May 14, 2020. [1] Nepal responded with a nationwide lockdown from March 24, 2020. The increasing number of infections and the accompanying uncertainty induced substantial fear and concern, leading to stress and anxiety that were compounded by lockdown restrictions, financial breakdown, and lack of physical contact with family members and friends. [2] The consequences of the pandemic and lockdown for the socioeconomic situation, mental health, and other aspects of Nepalese society are immense. [3] These alarming conditions may exacerbate the suicide rate, which is already high in our part of the world. Suicide and self-harm (SH) are serious public health problems; however, they are preventable with timely, evidence-based, and often low-cost interventions. Every year approximately 800,000 people die by suicide and many more attempt it. In 2016, suicide was listed as the second leading cause of death among 15-29-year-olds worldwide. [4] Suicide is already a key public health concern in Nepal, which was ranked 7th globally by suicide rate in 2014. The World Health Organization (WHO) estimates 6,840 suicides annually, or 24.9 suicides per 100,000 people, in our country. [5] In addition, civil conflict and the 2015 earthquake have had significant contributory effects. [6, 7] Recent studies conducted during the COVID-19 pandemic have revealed high levels of stress, anxiety, and depression in the community. [8-10] Studies conducted after the 2003 epidemic outbreak of Severe Acute Respiratory Syndrome (SARS) revealed a significant increase in the elderly suicide rate in Hong Kong. [11] Few publications reporting COVID-19 related suicide have been published. [12-16] Increased cases of suicide have been reported in police stations all over Nepal since the lockdown period. [17] However, we found no publications regarding COVID-19 related suicides presenting to the ED in developing countries. Most cases of SH present to the ED; therefore, this study is the first of its kind done in an acute care setting to address this crucial issue related to mental health. The aim of this study is to provide an overview of the impact of the COVID-19 pandemic and lockdown on the prevalence and clinical profile of suicide and SH in our ED. We compared the prevalence and clinical profile of SH in the ED during the COVID lockdown period in Nepal with the matching period of the previous year and the previous 3 months. We hope that all stakeholders related to mental health will be primed to initiate mental health assessment and screening for high-risk populations and to provide interventions for those in need during the period of crisis.
This is a cross-sectional observational study of all consecutive fatal and nonfatal SH presentations during the three study periods. Consecutive patients of all ages who presented to the ED during the study periods with any form of fatal or nonfatal SH, including attempted hanging, impulsive self-poisoning, and superficial cuttings, were included irrespective of the outcome. Patients who were discharged or referred were followed up with a telephone call to enquire about the final outcome. Incomplete data were excluded. The electronic medical record (EMR) system was searched with the keywords suicide, attempted suicide, poisoning, hanging, self-harm, self-injury, and overdose, and the data were collected in a predesigned form. The final outcome was recorded by combining the hospital records with phone calls if the case was referred, left against medical advice (LAMA), or was discharged. The data from the EMR and phone calls were collected in an Excel sheet on a password-secured laptop. Variables studied include patient demographics (age, gender, address), mode of transportation, triage details, time to presentation in the ED, previous attempts of SH, past psychiatric illness, comorbidities, vital signs at presentation, investigations, treatment offered in the ED, duration of stay, and disposition of the patients. Disposition was divided into disposition from the ED and from the hospital. ED disposition was categorized into admission, discharge, LAMA, referral, and mortality in the ED. Hospital disposition was categorized into discharge, LAMA, referral, and mortality in the hospital. The final outcome was categorized into recovery or mortality after follow-up phone calls and EMR review. Data were analyzed with SPSS version 21. Categorical variables were expressed as frequency/proportion and continuous ones as mean with standard deviation (SD) or median with interquartile range (IQR), as appropriate. Categorical variables were compared with the chi-square test. The independent-samples t-test/ANOVA or Mann-Whitney U test/Kruskal-Wallis test was used to compare continuous variables across categories. A p-value of less than 0.05 was considered significant. A total of 125 suicide/SH cases presented to the ED during the total study period: 55 during period 1 (44%), 38 during period 2 (30.4%), and 32 during period 3 (25.6%) (Figure 1). The total number of patients presenting to the ED had decreased by 46.9% and 44.7% during the lockdown period when compared to the earlier periods (period 1: 2085 versus period 2: 3926 and period 3: 3769, respectively). Cases of suicide and SH constituted 55 (2.6%), 38 (0.97%), and 32 (0.85%) of total ED cases during periods 1, 2, and 3, respectively.
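As an illustration of the headline comparison, the proportions of SH cases among all ED presentations in the three periods can be re-tested with SciPy rather than SPSS; the counts are those reported above, but the exact test configuration of the original analysis may differ:

```python
# Chi-square test: SH cases vs. all other ED presentations, periods 1-3.
from scipy.stats import chi2_contingency

sh_cases = [55, 38, 32]           # suicide/SH presentations per period
ed_totals = [2085, 3926, 3769]    # all ED presentations per period

observed = [sh_cases, [t - s for t, s in zip(ed_totals, sh_cases)]]
chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```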
Comparing the three periods, the cases of SH in period 1 increased by 44.7% (1.45 times) and 71.9% (1.72 times) relative to periods 2 and 3, respectively. The comparison of different variables across the three periods is depicted in Table 1 (*ANOVA; †independent-samples Kruskal-Wallis test; ‡those who could not be followed up were excluded, n=99). The ED disposition differed significantly across the three periods, with an increase in the hospital admission rate and LAMA and a decrease in referrals. Similarly, the overall hospital outcome was also significantly different (p-value=.001), with increased in-hospital mortality (18.2% vs 2.6% and 3.1%) and LAMA (18.2% vs 2.6% and 3.1%). Ninety-nine cases (79%) responded to a follow-up call, which showed no statistical difference (p-value=0.422) in overall mortality across the three periods (28.3%, 14.8%, and 23.1%). Despite a remarkable reduction in overall ED visits, our study showed a disproportionate increase in cases of suicide/SH during the lockdown period in comparison to the matching periods in the previous year and prior to the lockdown. OP (organophosphate) poisoning was the most common mode of suicidal attempt during all periods. There was a delay in time to arrival at the hospital during the lockdown period, with increased in-hospital mortality. Suicide is a preventable loss that affects families, communities, and entire countries. There is some evidence that deaths by suicide increased in Hong Kong during the 2003 SARS epidemic. [11] The WHO has predicted a rise in the number of mental health problems due to the global pandemic and has addressed this issue through various messages and publications related to mental health awareness and prevention. [18] A recent report from China during COVID-19 revealed that about a third of the sample reported moderate to severe anxiety, and 53% of respondents rated the overall psychological impact of the COVID-19 outbreak as moderate to severe. [9] Strong restrictive measures to avoid COVID-19 infection have led to loneliness, loss of jobs, and loss of access to health care, which may precipitate or worsen existing mental health problems. The lockdown has created a sudden economic recession, unemployment, and worsened poverty, which might have led individuals to contemplate suicide. Moreover, patients suffering from mental illnesses are unable to access health-care services.
The effects might be worse in resource-limited countries like ours, where poor economic status is compounded by inadequate welfare support. Our study shows a considerable rise in the number of suicidal cases since the lockdown period. Various case reports of suicide related to COVID-19 have been published worldwide. [19] Studies from our neighboring countries China, [12] India, [15] Pakistan, [13] and Bangladesh [14] have also raised concerns about an increased suicide rate related to COVID-19. In contrast, a study exploring mental health presentations to the ED before and during the COVID-19 outbreak in the developed world showed decreased suicide and SH. [20, 21] Gunnell et al. have categorized COVID-19 related suicide risk factors into financial stressors, domestic violence, alcohol consumption, isolation, access to means, and irresponsible media reporting, and have published a public health response model to mitigate these risks. [22] In our study, the common causes of suicidal attempts were disputes with family members and economic crisis. No cases directly related to COVID-19 illness or death were found. Suicide is reported as the second leading cause of death among 15-29-year-olds globally. [4] Previous publications have reported higher suicide and SH among women, younger age groups, migrant workers, the marginalized, and disaster-affected populations in Nepal. [5] After the 2003 SARS epidemic, an increased suicide rate among the elderly was reported in Hong Kong. [11] In our study, there was no statistically significant difference in the age of patients attempting suicide across the periods. The mean age was 32 years, and females attempted more suicide/SH in all three periods. SH includes a variety of behaviors, such as hanging, self-poisoning, cutting, and jumping from heights, in response to intolerable mental pressure. OPs are the most commonly used form of pesticide in Nepal. [23] A previous study done at our ED showed that organophosphate poisoning was the commonest form of poisoning. [24] Our study also showed that OP poisoning was the most common mode of attempted suicide. The lockdown, travel restrictions, and social distancing likely contributed to a significant reduction in the use of private transport for the transferal of patients to the ED. This may have caused the delay in presentation to the ED during the lockdown period in our sample. Our study also shows that the proportion of referrals of suicide/SH cases from our hospital was lower during the lockdown period. During the lockdown, the number of admissions for other illnesses requiring intensive care was low; therefore, more beds were vacant, reducing the need for referrals to other centers. This may be the reason for the increased overall in-hospital mortality for suicide and SH during the lockdown period, or patients might have used more lethal means of self-harm. The study was conducted in a rural tertiary care center, which is not representative of the whole country's situation. Moreover, it does not reflect the overall burden of mental health problems in the community and all population groups.
An in-depth study of the cases was not done to determine the root cause of the increased suicidal attempts. We found an increase in the number of patients presenting with suicide and SH to our ED during the pandemic, which is likely to reflect an increased prevalence of mental illness in the community. Coping with the effects of the COVID-19 pandemic is emotionally challenging, especially for vulnerable individuals with underlying mental illness or low socio-economic status. Mental health problems are considered a social stigma in our part of the world; therefore, people may be reluctant to share their feelings. Stressors such as the increasing number of cases and deaths due to COVID-19, prolonged social isolation due to lockdown and social or physical distancing, economic regression, and limited access to health care services due to fear of contracting COVID-19 may cause more panic, anxiety, and depression among the general public. The interplay of these factors can in turn precipitate suicide and SH in the future. Therefore, timely interventions to promote and protect people's mental health, and strategies to prevent suicide, are of utmost importance. [25] Efforts to prevent the spread of COVID-19 should be extended to raising awareness about dealing with mental health issues, recognizing the warning signs of suicide, and providing support to those in need. All stakeholders, including policymakers, psychiatrists, psychologists, and other healthcare professionals, should collaborate to raise awareness and to screen, detect, and intervene for patients in need in a timely manner. The challenge of the COVID-19 crisis might be an opportunity to advance suicide prevention efforts in our country and thus to save many precious lives.
Since December 2019, a number of unexplained pneumonia cases have been discovered by hospitals in Wuhan, Hubei Province, China, and have been confirmed as acute respiratory infections caused by a novel coronavirus [1]. On February 11, 2020, the World Health Organization (WHO) named the disease caused by this coronavirus 'COVID-19'. So far, there have been 2,335,180 patients with COVID-19 worldwide, and 161,491 have died (latest data as of the time of writing, 17:05 on April 20, 2020 [2]). The case data of the world's six most affected countries on April 2, 2020, and April 20, 2020, are shown in Table 1. COVID-19 has spread globally, seriously endangering people's lives and health and having a huge impact on the economy, education, and daily life of many countries. At present, only some provinces in China have resumed in-person teaching for students in junior and senior high school graduation years, while students from colleges, universities, and elementary schools across the country are taking online courses. Recently, the Ministry of Education of China announced that the national unified examination for admission to general universities and colleges would be delayed by one month [3]. The International Olympic Committee officially announced in a statement that the Tokyo 2020 Olympics will be held from July 23 to August 8, 2021 [4]. During the spread of the epidemic, national medical staff, scientific research teams, and epidemiological research experts have all been striving to contribute to the fight against the epidemic and have achieved very important results in China. Many important treatment methods and theoretical results will provide valuable experience for the world. The team led by Academician Zhong Nanshan pointed out in a medical paper [5] that the median incubation period of the new coronavirus is 3 days, with a minimum of 0 days and a maximum of 24 days. On February 11, 2020, Yang et al., taking 8,866 patients as a research sample, reached the important conclusion that the estimated basic reproductive number is R0 = 3.77 [6]. Another estimate, R0 = 2.7, was also based on surveillance data, but the methodology was different [7]. The basic reproductive number is a mathematical concept from epidemic dynamics that represents the number of people infected by a patient during the average infectious period. The team of Professor Xiao of Xi'an Jiaotong University put forward the concept of a mean control reproductive number [8]. This concept is novel and provides a new theoretical analysis method for the later stage of epidemic dynamics. Since that article was published early in the epidemic and its data were not comprehensive, its R0 calculation was not recognized by the academic community. However, the epidemic models proposed by epidemiologists can characterize the transmission mechanism of an epidemic, predict the final scale of the disease, and thereby provide valuable guidance for society's overall epidemic prevention work. More relevant research results can be found in the latest literature [9, 10]. We are now in an Internet era of information explosion. Information spreads quickly and in various forms. Popular and active social software worldwide includes WhatsApp, Facebook Messenger, the network platform, Instagram, QQ, Twitter, and Line. These applications have hundreds of millions or even billions of registered users; for example, the number of registered WhatsApp users has exceeded 1.5 billion [11].
As a platform for free speech, the Internet has also become a means of disseminating online rumors and has even affected the stability of society. In particular, during the epidemic, the spread of unconfirmed news caused social panic and disrupted normal life. For example, on January 31, some media reported that the Chinese patent medicine Shuanghuanglian oral solution can inhibit the new coronavirus [2]. After the news was released, Shuanghuanglian-related products were out of stock in some domestic pharmacies, causing market confusion. On March 9, the news that 'universities and middle schools in Beijing will open on April 6 and elementary schools and kindergartens will open on April 20' circulated among multiple network platform groups and friends. However, on the evening of March 11, the Beijing Municipal Education Commission made it clear that this was a false message [12]. In addition, some people spread rumors on the Internet that the price of daily necessities would increase sharply, causing some citizens to snap up living supplies and thus causing market chaos. Such events are endless. The spread of rumors clouds people's judgment and has an extremely bad influence. Given the information dissemination properties of the network platform, the number of friends a user has is an important factor affecting the spread of rumors: it determines the user's ability to spread information (rumors). This rests on the reasonable hypothesis that the number of friends of a user is mainly determined by the length of registration; a longer registration time forms a larger friend circle and hence creates a greater rumor impact. If the user's registration time is abstracted as 'age', the spreading mechanism of rumors on the network platform can be characterized by an SIR model with an 'age' structure. In view of the above analysis of the rumor propagation features of the network platform, we establish an SIR rumor-spreading model with 'age' structure. The ignorants are divided into those with a higher education level and those with a lower education level, and short-term online education is introduced. The main contributions of this paper are summarized as follows. (1) Considering that different users have different registration times, which lead to different abilities to spread rumors, we take 'age', the user's registration time, as an important variable. This differs from previous models, in which the distance between a user's geographic location and the rumor source is taken as the important factor [13]. (2) Considering that an individual's education level has an important influence on the spread of rumors, users are divided into two categories according to their educational level. Compared with treating all users as equally ignorant [14], this describes the spreading rules of rumors more reasonably. (3) Improving the education level of a population requires a long period. Our model first introduces short-term online education and quantitatively analyzes its inhibitory effect on the spread of rumors. This can be an important means of effectively controlling the spread of rumors. In this paper, we propose a novel epidemic-like model to quantitatively describe the law of rumor spread and then consider education as a control measure against rumors. The remainder of this paper is organized as follows. In Sect. 2, the latest achievements of rumor propagation research are reviewed. The establishment of the rumor propagation model is described in Sect. 3. In Sect.
4, we analyze the dynamic behavior of the model. In Sect. 5, the importance of education in view of the spread of rumors is summarized and some suggestions are given. Numerical simulations are presented in Sect. 6. Brief conclusions are given in Sect. 7. Rumors are remarks without a corresponding factual basis that are fabricated and promoted through certain means [15, 16]. In the past few years, we have seen many examples of rumors having a huge negative impact, and such rumors often accompany major incidents. For example, in April 2013, many rumors about the H7N9 epidemic spread throughout China, triggering great panic among the public and even posing a threat to social stability [17]. With the rapid development of information technology, rumor propagation is now mainly carried by social media rather than by traditional word of mouth. The network platform is the most popular social media platform in China, with the largest number of users. In the second quarter of 2019, the monthly active users of the WeChat platform reached 1.13 billion [18]. Although the network platform allows people to communicate easily, it also lets rumors spread uncontrollably through circles of friends. Therefore, it is extremely meaningful to study the mechanism of rumor propagation on the network platform and to propose approaches that effectively suppress the spread of rumors. The spread of rumors in social networks has attracted widespread attention from scholars at home and abroad [19, 20]. In particular, a series of rumor-spreading models based on the SIR dynamics of infectious disease transmission has achieved fruitful theoretical results. In the 1960s, the DK model, the earliest rumor-spreading model, was developed by Daley and Kendall [21]. In this model, the population is divided into three categories, namely ignorants, spreaders, and stiflers, corresponding to the susceptible, infected, and removed people of infectious disease models. Afterward, Maki and Thompson [22] proposed an additional hypothesis to improve the DK model, forming the MT model. Because the DK and MT models are only applicable to small-scale social networks and do not take into account the topological properties of networks, these classical propagation models are not sufficient for complex social interaction systems. Zanette [23] first combined complex networks with the rumor propagation model, analyzing the dynamic behavior of rumor spreading in small-world networks. Furthermore, the author considered the topological properties of the underlying network and put forward a useful stochastic approach on a scale-free network [24]. On the basis of complex networks, a series of studies on different mechanisms in the spreading of rumors has received sustained attention. Nekovee et al. [25] introduced a forgetting mechanism into the SIR rumor-spreading model, arguing that rumor spreaders might forget the rumors and become immune. In that model the forgetting rate is a constant, but Zhao et al. [26] proposed that the forgetting rate is a function that changes with time. A trust mechanism established between ignorant nodes and spreader nodes was proposed in [27]. Combining the research results of the above literature, the stifling rate, the forgetting rate, and the trust rate are three crucial factors that affect the spread of rumors.
However, there is very little literature on the impact of the educational level of the ignorant on the spread of rumors. In practice, the education level of the ignorant is the most significant factor affecting their ability to judge rumors. Based on the SIR model, Afassinou [28] introduced a SEIR model incorporating a forgetting mechanism and the educational level of the population, and demonstrated that increasing the educational level of the population promotes the termination of rumor spreading. Unfortunately, however, the education level of the less-educated ignorant cannot be improved in a short period of time; to prevent the spread of rumors, it is necessary to take measures aimed at the less-educated ignorant, such as short-term online education. On the other hand, the rumor-spreading models mentioned above are based on the classical SIR model of ordinary differential equations (ODE), which treats the numbers of ignorants, spreaders, and stiflers as functions of time t only. In the new media era, the study of rumor-spreading models needs to reflect how social software disseminates information. Zhu et al. [29] pointed out that, with the rapid development of mobile communication devices, the traditional ODE-based rumor propagation model may not be suitable for describing rumor spread in online social networks. Using partial differential equations (PDE), Wang et al. [30] proposed a linear diffusive model to understand the diffusion of information in both temporal and spatial dimensions. Zhu et al. [31] investigated a PDE model with a delayed feedback controller, which effectively controls the diffusion of adverse information in online social networks. Zhu et al. [32] proposed a delayed reaction-diffusion rumor propagation model based on partial differential equations and obtained the local stability of the equilibrium point and the condition for a Hopf bifurcation. Exact solutions and nonlinear dynamics are important for the study of partial differential equations [33-37], and some results have been derived recently for various kinds of nonlinear partial differential equations [38-42]. On the basis of models built from partial differential equations, scholars believe that rumor spread is related not only to time but also to space [43, 44]. Thus, it is more reasonable to study the rumor-spread mechanism in both temporal and spatial dimensions than with an ODE-based model. For the spatiotemporal dynamics of rumor propagation, we hold a view different from that of the literature [29-32, 43, 44]. We believe that on an instant-messaging network platform the distance from the information source is irrelevant; instead, the ability of platform users to spread information, such as the number of friends a user has, is more important. In the next section, on the establishment of the model, we elaborate on this view. This paper uses the network platform as a research carrier to study the rumor-spreading mechanism. We assume that N(t) is the number of network platform users at time t. For two users A and B, the number of friends they have has a distinct influence on the spread of rumors, as illustrated in Fig. 1 (the influence weight of users). The arrows indicate the directions in which information can be transmitted. Any user can be either a communicator of a message or a receiver of a message.
Users who are connected by a straight line in Fig. 1 are usually not online for long periods and do not participate in any interaction on the platform; such users are referred to as inactive users. Obviously, the impact of user A posting a message is greater than that of user B. We abstract a user's ability to distribute messages as an influence weight, which is usually related to the number of friends the user has (especially active friends). If the variable y represents the influence weight of a user, the total number of users can be regarded as a bivariate distribution function of time t and weight y. Usually, users with a long registration time have more friends, and their spreading influence is greater. Let a be the duration of the network platform user's registration; we define a as the age of the user. In this paper, we always make the following assumption. Assumption 1: The influence weight of a network platform user is positively related to the duration of the user's registration; that is, y = ka for some constant k > 0, with a in [0, M], where M is the network platform user expiration date (maximum age). Based on Assumption 1, the total number of users N can be expressed as a bivariate distribution function N(a, t) with age structure a at time t. Assume first that the number of users is constant: there are no new user registrations and no inactive users (or logouts). Within the time interval [t, t + dt], the equation dt = da clearly holds; that is, the increment of time equals the increment of age. At time t + dt, the number N(a, t + dt) da of users with age in the interval [a, a + da] should equal the number N(a - dt, t) da of users at time t with age in the interval [a - dt, a + da - dt]. Then we have N(a, t + dt) = N(a - dt, t), and applying the Taylor formula for functions of two variables yields the transport equation ∂N/∂t + ∂N/∂a = 0 (see the reconstruction below). In fact, a social platform has new users registering all the time, and some users leave or become inactive. A similar set of first-order nonlinear partial differential equations has been used to model the population dynamics of sickle cell anemia, a genetically inherited disease [45]. In the next subsection, we discuss the impact of these two scenarios on the total number of users. Assuming the registration rate of the network platform is α(a), the total number of users registered in the time interval [t, t + dt] can be computed from α(a) and N(a, t). On the other hand, the total number of users registered in the time interval [t, t + dt] should equal the total number of users with age in the interval [0, dt]; then we have the boundary condition N(0, t) = ∫_0^M α(a)N(a, t) da. This formula gives the number of new users registering on the network platform, which acts as an input term. Alongside the continuous input of new users, some users never participate in the dissemination of information and become inactive. Such users must be regarded as invalid and are not counted. Assume that μ(a - da) is the probability per unit time that a user with age in the interval [a - dt, a] becomes inactive. In the time interval [t, t + dt], the number of users whose age increases from [a - dt, a] to [a, a + dt] is reduced accordingly by these departures; we assume that inactive network platform users have no effect on the entire information circle and are not counted in the total number of users.
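The displayed equations of this derivation appear to have been lost in extraction; the following LaTeX reconstructs the standard steps that the surrounding text describes (conservation along characteristics, a first-order Taylor expansion with da = dt, then the registration and inactivation terms):

```latex
% Conservation of users along characteristics (no registration, no drop-out):
\[ N(a,\,t+\mathrm{d}t) = N(a-\mathrm{d}t,\,t). \]
% Expanding both sides to first order in dt (with da = dt) and cancelling
% N(a,t) yields the transport equation for the age-structured user density:
\[ \frac{\partial N(a,t)}{\partial t} + \frac{\partial N(a,t)}{\partial a} = 0. \]
% With a registration rate alpha(a) and an inactivation rate mu(a), the same
% balance argument gives the boundary condition and the full user model:
\[ N(0,t) = \int_0^{M} \alpha(a)\,N(a,t)\,\mathrm{d}a, \qquad
   \frac{\partial N}{\partial t} + \frac{\partial N}{\partial a}
   = -\mu(a)\,N(a,t), \qquad N(a,0) = N_0(a). \]
```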
Using the Taylor formula to expand both sides of the above balance equation, discarding the higher-order terms, noticing that dt = da, and omitting terms of order (dt)^2, we obtain the partial differential equation ∂N(a,t)/∂t + ∂N(a,t)/∂a = -μ(a)N(a,t). The corresponding initial condition is N(a, 0) = N_0(a), where N_0(a) represents the distribution density of the initial users. Synthesizing (1), (2), and (3), a partial differential equation (PDE) model of network platform user development is obtained. We subdivide the network platform users into three classes, and let S(a,t), I(a,t), and R(a,t) be the age densities of, respectively, the ignorant users, the spreading users, and the immune users at time t. Furthermore, we divide the ignorant users into a higher-educated class S_h(a,t) and a lower-educated class S_l(a,t). Obviously, the classes sum to the total: S_h(a,t) + S_l(a,t) + I(a,t) + R(a,t) = N(a,t). The main process of rumor spreading on the network platform is shown in Fig. 2. The forces of spreading of the rumor, denoted λ_h(a) and λ_l(a), are defined following [46-48], where σ(a) is the age-specific spread rate, and k_h(a) and k_l(a) are the age-specific rates at which spreaders contact higher-educated and lower-educated users, respectively. The other major parameters are given in Table 2; all parameters take values in the interval [0, 1]:

β_h(a): the probability that a higher-educated individual becomes immune;
β_l(a): the probability that a lower-educated individual becomes immune;
γ_l(a): the probability that a lower-educated individual becomes immune through short-term online education;
δ(a): the probability that a rumor spreader gives up spreading and becomes immune (see [43]);
μ(a): the probability that a user becomes inactive;

with γ_l(a) ≤ 1, λ_l(a) ≥ λ_h(a), and β_l(a) ≤ β_h(a). When an ignorant user contacts a spreader, the ignorant user either becomes a spreader by accepting the rumor or becomes immune by recognizing the truth. Education is the key factor determining whether an ignorant user can recognize rumors; based on this, we introduce a short-term online education control mechanism γ_l(a) for ignorant users with a low educational level. A rumor spreader becomes immune for one of two further reasons: the rumor becomes outdated, or the spreader loses interest in it. Under the above assumptions, the spread of the rumor can be described by a system of partial differential equations, referred to as model (6); a plausible reconstruction is given below. We assume that the total number of users of the network platform is in equilibrium: the number of newly registered users per unit time equals the number of users becoming inactive, so that N(a, t) = N(a) is independent of t. We introduce four new normalized variables s_h(a,t) = S_h(a,t)/N(a), s_l(a,t) = S_l(a,t)/N(a), i(a,t) = I(a,t)/N(a), and r(a,t) = R(a,t)/N(a). It is easy to obtain that s_h(a,t) + s_l(a,t) + i(a,t) + r(a,t) = 1. To facilitate the study, we normalized model (6) and converted it into model (7); the two models are equivalent. In the next section, we give a comprehensive analysis of the dynamic behavior of model (7). To study the dynamics of the model, the global existence of the solution must be ensured first; this is also a necessary condition for the rationality of the proposed model. In the next subsection, we prove the existence and uniqueness of the solution.
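The system referred to as model (6) is not reproduced in this extraction. The following is a hypothetical reconstruction, consistent with Fig. 2, the parameter list above, and the quantity V used later in the steady-state analysis; the exact placement of the β and μ terms may differ in the original paper:

```latex
% Hypothetical reconstruction of the age-structured rumor model (6).
% Contacts with spreaders turn ignorants into spreaders (prob. 1 - beta)
% or into immune users (prob. beta); gamma_l is short-term online education.
\[
\begin{aligned}
\frac{\partial S_h}{\partial t} + \frac{\partial S_h}{\partial a}
  &= -\lambda_h(a)\,V(t)\,S_h - \mu(a)\,S_h,\\
\frac{\partial S_l}{\partial t} + \frac{\partial S_l}{\partial a}
  &= -\lambda_l(a)\,V(t)\,S_l - \gamma_l(a)\,S_l - \mu(a)\,S_l,\\
\frac{\partial I}{\partial t} + \frac{\partial I}{\partial a}
  &= \bigl(1-\beta_h(a)\bigr)\lambda_h(a)\,V(t)\,S_h
   + \bigl(1-\beta_l(a)\bigr)\lambda_l(a)\,V(t)\,S_l
   - \delta(a)\,I - \mu(a)\,I,\\
\frac{\partial R}{\partial t} + \frac{\partial R}{\partial a}
  &= \beta_h(a)\lambda_h(a)\,V(t)\,S_h + \beta_l(a)\lambda_l(a)\,V(t)\,S_l
   + \gamma_l(a)\,S_l + \delta(a)\,I - \mu(a)\,R,
\end{aligned}
\]
\[
V(t) = \int_0^{+\infty} \sigma(a)\,N(a)\,i(a,t)\,\mathrm{d}a .
\]
```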
Consider the initial boundary value problem of model (6) as an abstract Cauchy problem on the Banach space X := L^1(0, M; C^2), the set of equivalence classes of Lebesgue-integrable functions from [0, M] to C^2 equipped with the L^1-norm. Let A be a linear operator on X and F : X → X a nonlinear operator, defined from the right-hand side of the model. Then model (6) can be rewritten as the abstract Cauchy problem du(t)/dt = Au(t) + F(u(t)), t ≥ 0, u(0) = u_0, where u(t) = (s_h(a,t), s_l(a,t), i(a,t), r(a,t)). Theorem 1: The initial boundary value problem of model (6) has a unique nonnegative classical solution on X with respect to the initial data (s_h(a,0), s_l(a,0), i(a,0), r(a,0)). Proof: It is easily obtained that the operator A is the infinitesimal generator of a C_0-semigroup T(t), t ≥ 0, and that F is continuously Frechet differentiable on X. Then, for each u_0 ∈ X, there exist a maximal interval of existence [0, m) and a unique continuously differentiable mild solution t → u(t, u_0) on [0, m) (see [47]), where either m = +∞, or m < +∞ and lim_{t→m} ||u(t, u_0)|| = ∞. Since N(a,t) = N(a) < +∞, we obtain m = +∞. The proof is completed. Rewriting model (6) as an abstract Cauchy equation in vector form, we thus obtain that model (6) admits a unique nonnegative classical solution by C_0-semigroup theory. Since models (6) and (7) are equivalent, model (7) also has a unique nonnegative classical solution. The equilibrium point of an autonomous model, which indicates a state where the system reaches dynamic equilibrium, has an important physical significance: it is usually the level of balance we expect the system eventually to reach. Similarly, nonautonomous models have steady-state solutions that are independent of the time variable. In particular, general models also have a marginal equilibrium state from the mathematical point of view; however, such a marginal equilibrium has no practical significance. For model (7), our goal is to reduce the amount of i(a,t); of course, i(a,t) = 0 is also unrealistic. For a network platform with nearly one billion users, we can only control and reduce the spread of rumors, which is difficult to ban completely. Therefore, we study only the existence of the positive steady-state solution of model (7). Theorem 2: If the basic reproductive number of the rumor satisfies R0 = H(0) > 1, then there exists a steady-state solution X* = (s_h*(a), s_l*(a), i*(a), r*(a)) of model (7). Proof: Consider (s_h*(a), s_l*(a), i*(a), r*(a)) as a time-independent solution of model (7), satisfying the steady-state equations (9)-(12). Substituting (10) into (11) and changing the order of integration gives an expression for r*(a). From the first and second equations of (9), we get (13). For any V* > 0, substituting V* into (13), there is a unique pair s_h*(a) and s_l*(a) corresponding to it. Substituting s_h*(a) and s_l*(a) into (12) solves for r*(a); for the same reason, substituting s_h*(a) and s_l*(a) into (10) solves for i*(a). That is to say, each V* corresponds to a unique equilibrium state (s_h*(a), s_l*(a), i*(a), r*(a)). Next, we discuss the existence of V*. Substituting i*(a) into ∫_0^{+∞} σ(a)N(a)i*(a) da =: V* and dividing both sides by V*, we obtain an equation H(V*) = 1, labelled (14). We define R0 = H(0). It is easy to see that the existence of a steady-state solution of model (7) is equivalent to equation (14) having a positive solution. We can claim that i*(a) < 1 because s_h*(a) + s_l*(a) + i*(a) + r*(a) = 1. Therefore, we obtain an upper bound in terms of σ_max = sup_{a ∈ [0,+∞)} σ(a); in particular, H(σ_max N) < 1 at V* = σ_max N.
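For reference, the mild-solution formula appealed to in the proof of Theorem 1, whose display was also lost in extraction, is presumably the standard variation-of-constants form:

```latex
% Mild solution of the abstract Cauchy problem du/dt = Au + F(u), u(0) = u0:
\[ u(t,u_0) = T(t)\,u_0
   + \int_0^{t} T(t-s)\,F\bigl(u(s,u_0)\bigr)\,\mathrm{d}s,
   \qquad t \in [0, m). \]
```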
As can be seen from expressions (13) and (14), H(V*) is monotonically decreasing with respect to V*. In view of the theorem condition R₀ = H(0) > 1, the intermediate value property of continuous functions implies that Eq. (14) has a unique positive solution V̄* in (0, σ_max N). The proof is completed.

In this section, we focus on the stability of the steady-state solution of model (7). Let the hatted quantities ŝ_h(a, t), ŝ_l(a, t), î(a, t), r̂(a, t) and V̂ denote linear perturbations of the steady-state solutions s*_h(a), s*_l(a), i*(a), r*(a) and V*, respectively. It is easy to see that the stability of the steady-state solution is equivalent to the linear perturbations tending to zero. We assume that the perturbations of model (7) have the exponential form (15).

Theorem 3 If the condition f(ξ) ≥ 0 is satisfied, then the steady-state solution of model (7) is locally asymptotically stable.

Proof From (15), we obtain the linearized (approximate) system. Note that the intensities of the perturbations can be either positive or negative; for the convenience of analysis, we make a variable substitution and obtain system (16). Solving the third and fourth equations of model (16) directly, we obtain explicit expressions for the perturbed components, and it follows that the perturbations of s_h(a) and s_l(a) are negative. Precisely, the resulting characteristic equation takes the form (17), defining the function Q(λ). In view of the condition of Theorem 3, Q(λ) ≥ 0, Q(λ) is a decreasing function of λ, and Q(λ) → 0 as λ → +∞. It is easy to see that the first integral in (17) is equal to one, while the second and third integrals are negative terms; therefore, Q(0) < 1. Then, in view of the monotonicity of Q(λ), the equation Q(λ) = 1 has a unique real solution, which is negative, and all complex solutions have real parts smaller than this unique real solution. Therefore, the steady-state solution of model (7) is locally asymptotically stable by Lyapunov stability theory. The proof is completed.

In this section, we mainly focus on the impact of educational factors on the spread of rumors. The educational factors refer to the education level of the ignorant users and to short-term online network education. First, the ignorant users are divided into higher educators and lower educators to study the rumor spreading mechanism without the short-term network education factor, namely with γ_l = 0. This is based on the premise that the higher the education level, the stronger a user's ability to identify rumors. Short-term online education is then provided for the ignorant users with the lowest education level to improve their ability to recognize rumors. Finally, we give reasonable suggestions and feasible measures based on our analysis results.

According to the rumor spreading mechanism established in model (7), we measure the final size of rumor spreading by the following two quantities. (1) The scale of the rumor effect M = ∫₀^{+∞} i*(a) da, where M represents the total harm caused over the whole process of rumor spreading up to the equilibrium state. (2) The final health status of the network r*(a), which represents the proportion of users who have the ability to identify rumors at equilibrium; it measures the network's ultimate ability to resist rumors. Obviously, the smaller M is, the less harm rumors cause before the equilibrium state is reached, and the greater r*(a) is, the stronger the network's ability to resist rumors.
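To make the threshold condition R₀ = H(0) > 1 concrete, the following minimal Python sketch solves H(V*) = 1 by bisection. It assumes the SIR-type structure reconstructed after the model section, and all parameter values (A, N, K, σ, k_h, k_l, β_h, β_l, γ_l, δ) are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Illustrative constants (hypothetical; not taken from the paper)
A = 50.0                      # age horizon replacing [0, +inf)
N = 1.0                       # stationary (normalized) user density
K = 0.13                      # share of higher-educated users at age 0
sigma = 0.1                   # spread rate sigma(a), held constant
k_h, k_l = 0.3, 0.6           # contact rates; lower-educated are contacted more
beta_h, beta_l = 0.6, 0.2     # probability of becoming immune on contact
gamma_l, delta = 0.1, 0.2     # short-term education rate; spreader give-up rate
da = 0.01                     # age step for the explicit Euler integration

def total_infection(V):
    """Integrate the stationary equations along age for a trial V,
    returning the value of the integral  int sigma(a) N(a) i*(a) da."""
    s_h, s_l, i, acc = K, 1.0 - K, 0.0, 0.0
    for _ in np.arange(0.0, A, da):
        f_h = k_h * V / N * s_h           # force of spreading on higher-educated
        f_l = k_l * V / N * s_l           # force of spreading on lower-educated
        di = (1 - beta_h) * f_h + (1 - beta_l) * f_l - delta * i
        s_h -= f_h * da
        s_l -= (f_l + gamma_l * s_l) * da
        i += di * da
        acc += sigma * N * i * da
    return acc

def H(V):                      # H(V) = (1/V) * int sigma N i*(a; V) da
    return total_infection(V) / V

R0 = H(1e-9)                   # R0 = H(0), approximated at a tiny V
print(f"R0 = H(0) ~= {R0:.3f}")

if R0 > 1.0:                   # Theorem 2: a positive root V* of H(V) = 1 exists
    lo, hi = 1e-9, 0.5
    while H(hi) > 1.0:         # expand the bracket until H(hi) < 1
        hi *= 2.0
    for _ in range(60):        # bisection; H is monotonically decreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if H(mid) > 1.0 else (lo, mid)
    print(f"steady-state V* ~= {0.5 * (lo + hi):.4f}")
```

Raising γ_l, β_h or β_l lowers R₀; once R₀ drops below 1, Theorem 2 no longer guarantees a positive steady state, and the bisection branch is skipped.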
We define K = s_h(0, t) / (s_h(0, t) + s_l(0, t)) to represent the proportion of users with higher education among all users at the initial moment, which also represents the overall education level of the entire user community. In this case, we again assume that the short-term network education factor γ_l = 0. In particular, if we assume that, in the initial state, no user has received higher education, the model reduces to the special form (22). From the steady-state relations (13), we conclude that M decreases as K increases, and that r*(a) increases as K increases.

Case 1 Positive effects of college students during the spread of COVID-19. The outbreak began during the winter vacation of 2019, before Chinese universities had announced the start of classes. This caused all college students and graduate students to stay at home. These senior intellectuals were distributed across families in every community, which significantly improved the education level of social groups. (In fact, when students are concentrated on campus, they do not participate in community social activities, which is equivalent to being invalid users.) That is, the ratio K = s_h(0, t)/(s_h(0, t) + s_l(0, t)) of model (22) becomes larger. From the above theoretical analysis, it can be concluded that the total harm of the rumor, M = ∫₀^{+∞} i*(a) da, will decrease and the proportion of users with the ability to identify rumors, r*(a), will increase. This is borne out in practice: during this outbreak of COVID-19, senior intellectuals had a strong ability to identify rumors, and they could spread scientific epidemic-prevention knowledge and related information in their communities in time. These measures inhibited the spread of rumors and contributed positively to social stability. In short, the theoretical results show that the level of education directly affects the final size of rumor spreading: the more users with higher education in the network, the smaller the influence of rumors and the stronger the network's ability to resist them.

Through the above analysis, we know that improving the users' education level is an effective means to control the final size of rumor spreading. However, the overall education level of a network group is essentially fixed over a short period. Therefore, it is highly necessary to carry out short-term network education for users with a low education level. Based on the conclusions of the previous subsection, we conclude that M decreases as γ_l(a) increases, and that r*(a) increases as γ_l(a) increases.

In this paper, we mainly considered two important factors affecting rumor spreading: the education level K and short-term online education γ_l(a). The education level K is uncontrollable in the short term; short-term network education γ_l(a), however, is an effective way to control rumor spreading. To this end, we give the following suggestions. (1) For rumors that appear in social networks and continue to ferment, the media should promptly correct them, disclose the false information behind them, and report the negative impact and social harm that their spread causes. The media should take advantage of its platforms' speed and credibility to expand the scope of such reports, so as to awaken more users deceived by rumors and stop the spread of rumors.

Case 2 Positive effects of credible experts and official media during the spread of COVID-19. Human beings need a cognitive process when facing sudden disasters. So far, we have not determined the source of the novel coronavirus pneumonia.
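To see why M falls as K rises, one can integrate the reconstructed stationary relations (13) along characteristics; this is a heuristic sketch under the same assumed model structure as above, not the paper's own derivation:

```latex
s_h^*(a) = K\,\exp\!\Big(-\int_0^a \frac{k_h(\tau)\,V^*}{N}\,\mathrm{d}\tau\Big),
\qquad
s_l^*(a) = (1-K)\,\exp\!\Big(-\int_0^a \Big[\frac{k_l(\tau)\,V^*}{N}+\gamma_l(\tau)\Big]\,\mathrm{d}\tau\Big).
```

Since k_l(a) ≥ k_h(a) and β_l(a) ≤ β_h(a), raising K moves ignorant mass from the class that feeds the spreader density strongly into the class that feeds it weakly, so the inflow (1-β_h)k_h V* s*_h/N + (1-β_l)k_l V* s*_l/N into i* decreases, and with it M = ∫₀^{+∞} i*(a) da, while r*(a) increases.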
There were many speculations about the etiology of the pneumonia in the early days. Academician Zhong Nanshan, an authoritative expert, clarified at a press conference that the outbreak of COVID-19 occurred in Wuhan, but that there is no evidence that its source is also in Wuhan: this is a scientific question, and it is irresponsible to draw conclusions casually before it is figured out. This eliminated the doubts of the masses, and some rumors disappeared as a result. The three control suggestions mentioned above can all be attributed to short-term online education γ_l(a). In view of model (7), we note that the term γ_l(a)s_l(a, t) is transferred from s_l(a, t) to r(a, t); that is, through short-term online education, some people with a lower education level learn to recognize rumors and become rumor-immune. This is consistent with our theoretical conclusion from (12): r*(a) increases as γ_l(a) increases. For a fierce emergency such as COVID-19, it is of great significance that the official media, government departments and authoritative experts in the medical field publish reliable information in a timely manner, which can effectively control the spread of rumors and reduce the anxiety of the masses.

In this section, we simulate the dynamic characteristics of the proposed rumor propagation model with MATLAB and analyze the influence of the education level and short-term online education factors on the spread of rumors.

The dynamic characteristics of model (7)

To show the dynamic characteristics of model (7), we select a set of basic parameter values. Theorem 1 proves the existence of a nonnegative classical solution of models (6) and (7). The dynamic characteristics of i(a, t) and r(a, t) of model (7) are shown in Fig. 3. Note that the maximum value of i(a, t) is close to 0.2, which means that at some point 20% of the participants are involved in the spread of the rumor; in a critical period such as the outbreak of COVID-19, the harm such rumors cause to society is huge.

In this example, we assume that the short-term online education strength γ_l(a) = 0, using the control-variable method. The family of curves of i(a) (or r(a)) at different times t is depicted in panel (a) of Fig. 4 (or Fig. 5): each curve shows the variation of i (or r) with age a at a fixed time t. We can see that the variable a has an important influence on i (or r) at different times. For a fixed a, the variation of i(t) (or r(t)) with time t is shown in panel (b) of Fig. 4 (or Fig. 5). To visualize the numerical results, we fix the age a = 6, so that i(a, t) (or r(a, t)) reduces to a function of t alone. It is easy to calculate that K₁(t) ≈ 10%, K₂(t) ≈ 20%, K₃(t) ≈ 30%, K₄(t) ≈ 40%, and K₁(t) < K₂(t) < K₃(t) < K₄(t). In panel (c) of Fig. 4 (or of Fig. 5), we use red, pink, blue and black lines to represent the change of i(t) (or r(t)) over time under the initial-value conditions K₁(t), K₂(t), K₃(t) and K₄(t), respectively. Obviously, as the education level K(t) increases, i(t) gradually decreases and r(t) gradually increases, which is consistent with the theoretical analysis of Sect. 5.1. In other words, as the number of higher-education users increases, the final size of rumor spreading M decreases. According to data released by the Ministry of Education of China, as of 2018 the number of people holding a college diploma or above accounted for 13% of the total population of the country [3].
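The paper's surfaces and curve families were produced in MATLAB; the sketch below reruns the same kind of experiment in Python on the reconstructed model, stepping exactly along characteristics by taking dt = da. All parameter values and the initial seed profile are illustrative stand-ins:

```python
import numpy as np

# Hypothetical parameters -- the paper's exact values for model (7) are not
# reproduced here, so these are illustrative stand-ins.
A, T = 50.0, 100.0
da = 0.1                        # with dt = da we step exactly along characteristics
ages = np.arange(0.0, A, da)
n = ages.size
K = 0.13                        # higher-educated share among new users
sigma, k_h, k_l = 0.1, 0.3, 0.6
beta_h, beta_l = 0.6, 0.2
gamma_l, delta = 0.1, 0.2
N = 1.0

# initial profiles: a small seed of spreaders, the rest ignorant
i = 0.05 * np.exp(-ages)
s_h = K * (1 - i)
s_l = (1 - K) * (1 - i)
r = np.zeros(n)

def step(s_h, s_l, i, r):
    V = np.sum(sigma * N * i) * da            # V(t) = int sigma N i da
    f_h = k_h * V / N * s_h
    f_l = k_l * V / N * s_l
    ds_h = -f_h
    ds_l = -f_l - gamma_l * s_l
    di = (1 - beta_h) * f_h + (1 - beta_l) * f_l - delta * i
    dr = beta_h * f_h + beta_l * f_l + gamma_l * s_l + delta * i
    def move(u, du, u0):
        # shift one cell along the characteristic and add the reaction term
        v = np.empty_like(u)
        v[1:] = u[:-1] + da * du[:-1]
        v[0] = u0                             # boundary condition at age 0
        return v
    return (move(s_h, ds_h, K), move(s_l, ds_l, 1 - K),
            move(i, di, 0.0), move(r, dr, 0.0))

for _ in np.arange(0.0, T, da):
    s_h, s_l, i, r = step(s_h, s_l, i, r)

print(f"spreader density peak at t={T}: {i.max():.3f}")
print(f"immune share at age 6: {r[int(6 / da)]:.3f}")
```

Sweeping K over {0.1, 0.2, 0.3, 0.4} while keeping everything else fixed should reproduce the qualitative ordering of the curves in panel (c) of Figs. 4 and 5: a larger K yields a uniformly lower i(t) and a higher r(t).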
Therefore, our selection of K₁(t) is reasonable; with the continuous improvement of China's education level, this figure will certainly increase. The innovation of this paper is that we add the control factor of online short-term education, γ_l(a), to the model. Similar to the previous two examples, we fix the age a = 6 and K = K₁(t), and choose four different short-term education functions γ_l(a). In Fig. 6, we use red, pink, blue and black lines to represent the change of i(t) (or r(t)) over time under the four short-term online education control factors, respectively. Obviously, as the short-term online education control factor γ_l(a) increases, i(t) gradually decreases and r(t) gradually increases, as shown in panels (a) and (b) of Fig. 6. This is consistent with the theoretical analysis of Sect. 5.2. In fact, as we suggested in Sect. 5.2, short-term online education is the fastest and most effective way to control rumors. For example, during the outbreak of COVID-19, the government released official news in a timely manner, and the official news media updated the data promptly, which played an important role in stabilizing public sentiment and resisting the spread of rumors.

In this paper, we have established a rumor propagation model based on nonautonomous partial differential equations. Combining the laws governing users' dissemination of information, three important factors are considered: the users' education level, their registration time, and short-term online education. The existence and uniqueness of the positive solution of model (7) are obtained using C₀-semigroup theory. If the basic reproductive number of the rumor satisfies R₀ > 1, then model (7) admits a steady-state solution; furthermore, the local asymptotic stability of the steady-state solution is obtained. For an online platform with a large number of users, completely eliminating the spread of rumors is only an ideal state; therefore, we did not consider the marginal equilibrium of model (7), which has no practical significance. Analyzing the main factors that influence i(a, t) in the steady-state solution provides effective means to reduce its scale. The interesting conclusion of this article is that improving the education level and conducting short-term online education are important strategies for effectively controlling the spread of rumors. The numerical simulations verify the correctness and feasibility of the theoretical results.

The outbreak of COVID-19 has brought huge disasters to all countries in the world, directly affecting people's lives and health, economic development, transportation, education, employment and other aspects. In addition, the spread of the epidemic has had many indirect impacts, such as public panic during the epidemic. Psychological research shows that when people face sudden crises, they exhibit varying degrees of anxiety, nervousness and other excessive reactions; these overreactions can damage the immune system and cause physical and mental illness. This article has focused on the laws governing the spread of related rumors during the epidemic and on their control. Quantitative analysis and evaluation can guide us to effectively control the spread of rumors and alleviate the negative psychological impact of COVID-19.
Back in 1928, Alexander Fleming 1 began the microbial drug era when he discovered, in a Petri dish seeded with Staphylococcus aureus, that a compound produced by a mold killed the bacteria. The mold, identified as Penicillium notatum, produced an active agent that was named penicillin. Later, penicillin was isolated as a yellow powder and used as a potent antibacterial compound during World War II. By using Fleming's method, other naturally occurring substances, such as chloramphenicol and streptomycin, were isolated. Naturally occurring antibiotics are produced by fermentation, an old technique that can be traced back almost 8000 years, initially for beverage and food production. Beer is one of the world's oldest beverages, produced from barley by fermentation, possibly dating back to the sixth millennium BC and recorded in the written history of ancient Egypt and Mesopotamia. Another old fermentation, used to initiate the koji process, was that of rice by Aspergillus oryzae. During the past 4000 years, Penicillium roqueforti has been utilized for cheese production, and for the past 3000 years soy sauce in Asia and bread in Egypt have represented examples of traditional fermentations. 2

Natural products with industrial applications can be produced from the primary or secondary metabolism of living organisms (plants, animals or microorganisms). Owing to technical improvements in screening programs and in separation and isolation techniques, the number of natural compounds discovered exceeds 1 million. 3 Among them, 50-60% are produced by plants (alkaloids, flavonoids, terpenoids, steroids, carbohydrates, etc.) and 5% have a microbial origin. Of all the reported natural products, approximately 20-25% show biological activity, and of these approximately 10% have been obtained from microbes. Furthermore, of the 22 500 biologically active compounds that have been obtained so far from microbes, 45% are produced by actinomycetes, 38% by fungi and 17% by unicellular bacteria. 3 The increasing role of microorganisms in the production of antibiotics and other drugs for the treatment of serious diseases has been dramatic. However, the development of resistance in microbes and tumor cells has become a major problem and requires much research effort to combat it.

Drugs of natural origin have been classified as (i) original natural products, (ii) products derived or chemically synthesized from natural products or (iii) synthetic products based on natural product structures. Evidence of the importance of natural products in the discovery of leads for the development of drugs for the treatment of human diseases is provided by the fact that close to half of the best-selling pharmaceuticals in 1991 were either natural products or their derivatives. 4 In this regard, of the 25 top-selling drugs reported in 1997, 42% were natural products or their derivatives, and of these, 67% were antibiotics. Today, the structures of around 140 000 secondary metabolites have been elucidated. It is important to understand that many chemically synthesized drugs owe their origin to natural sources. Applications of chemically synthesized natural metabolites include the salicylic acid derivatives present in white willow, wintergreen and meadowsweet, used to relieve pain and suffering. Concoctions of these plants were administered by Hippocrates back in the year 500 BC, and even earlier in Egypt and Babylonia, for fever, pain and childbirth.
Synthetic salicylates were produced initially by Bayer in 1874, and later, in 1897, Arthur Eichengrün at Bayer discovered that an acetyl derivative (aspirin) reduced acidity, bad taste and stomach irritation. These plant-based systems continue to play an essential role in health care, and it has been estimated by the World Health Organization (WHO) that approximately 80% of the world's inhabitants rely mainly on traditional medicines for their primary health care. 5 Other synthesized compounds originating from natural products include a nonapeptide, designated teprotide, which was isolated from the venom of the Brazilian pit viper Bothrops jararaca. 6 This led to the design and synthesis of angiotensin-converting enzyme (ACE) inhibitors such as captopril, which was the first marketed, orally active ACE inhibitor. 7 Enalapril, another ACE inhibitor used in the treatment of cardiovascular disease, was approved for marketing by the Food and Drug Administration (FDA) in 1985. 6

The alkaloid quinine, the active constituent of the 'fever tree' Cinchona succirubra, has been known for centuries by South American Indians to control malaria. During the twentieth century, massive programs to synthesize quinoline derivatives, based on the quinine prototype, were carried out. The first of the new quinolones to be used clinically as an antibacterial agent was nalidixic acid, which emerged from a large chemical synthesis program developed at the Sterling Winthrop Research Institute. 8, 9 The program was begun when 7-chloro-1,4-dihydro-1-ethyl-4-oxoquinolone-3-carboxylic acid was obtained as a side product during the purification of chloroquine and found to have antibacterial activity. The best compound found in the program was nalidixic acid, which had remarkable activity against Gram-negative bacteria and was shown to be an inhibitor of DNA gyrase. Its discovery led to a whole series of synthetic quinolone and fluoroquinolone antibiotics (pefloxacin, norfloxacin, ciprofloxacin, levofloxacin, ofloxacin, lomefloxacin, sparfloxacin, etc.), which have been very successful in medicine and have achieved major commercial success (Table 1). It is important to appreciate that all quinolones, though synthetic, are based on the structure of the natural plant product quinine.

Secondary metabolites have exerted a major impact on the control of infectious diseases and other medical conditions, and on the development of the pharmaceutical industry. Their use has contributed to the increase in average life expectancy in the USA, which rose from 47 years in 1900 to 74 years (in men) and 80 years (in women) in 2000. 11 Probably the most important use of secondary metabolites has been as anti-infective drugs. In 2000, the market for such anti-infectives was US$55 billion (Table 1), and in 2007 it was US$66 billion. Table 1 shows that, among the anti-infective drugs, antivirals represent more than 20% of the market. Two antivirals that are chemically synthesized today were originally isolated from marine organisms: acyclovir (active against the herpes virus by inhibition and inactivation of DNA polymerase) and cytarabine (active against non-Hodgkin's lymphoma). Both compounds are nucleoside analog drugs, originally isolated from sponges. 12 Other antiviral applications of natural compounds are related to the treatment of human immunodeficiency virus (HIV).
In the pathogenesis of this disease, HIV-1, like other retroviruses, depends on its stable integration into the host genome to facilitate efficient replication of the viral RNA and maintenance of the infected state. Therefore, de novo viral DNA synthesized during reverse transcription is immediately integrated into the host cell DNA (through the integration step), allowing further transcription of viral RNA. In the late phase of HIV viral replication, the large precursor polyprotein (gag-pol precursor, Pr160) must be appropriately cleaved by a viral protease. The cleavage of the gag precursor protein of HIV is critical for the maturation and infectivity of the viral particle; without appropriate cleavage of the precursor polyproteins, non-infectious viral particles are generally produced. To confront this problem, a tremendous effort has been made at the US National Cancer Institute (NCI) in search of natural metabolites capable of inhibiting HIV reverse transcriptase and HIV protease. Chemically synthesized derivatives of these compounds are the main agents now used against HIV. Furthermore, reports have been published on natural product inhibitors of HIV integrase obtained from among the marine ascidian alkaloids, that is, the lamellarins (produced by the mollusk Lamellaria sp.), and from terrestrial plants (Baccharis genistelloides and Achyrocline satureioides). The most consistent anti-HIV activity was observed with extracts prepared from several Baccharis species. 13 In addition, the NCI has been evaluating the HIV-1 inhibitory activity of pepstatin A, a small pentapeptide produced by several Streptomyces species. It contains a unique hydroxyamino acid, statine, that sterically blocks the active site of HIV-1 protease. 14, 15

REASONS FOR DEVELOPING NEW ANTIBIOTICS

New antibiotics that are active against resistant bacteria are required. Bacteria have lived on the Earth for several billion years, during which time they encountered in nature a wide range of naturally occurring antibiotics. To survive, they developed antibiotic resistance mechanisms. Therefore, it is not surprising that they have become resistant to most of the natural antimicrobial agents that have been developed over the past 50 years. 16 This resistance increasingly limits the effectiveness of current antimicrobial drugs. The problem is not just antibiotic resistance but also multidrug resistance. In 2004, more than 70% of pathogenic bacteria were estimated to be resistant to at least one of the currently available antibiotics. 17 The so-called 'superbugs' (organisms that are resistant to most of the clinically used antibiotics) are emerging at a rapid rate. S. aureus strains resistant to methicillin are responsible for many cases of infection each year. The incidence of multidrug-resistant pathogenic bacteria is increasing. The Infectious Disease Society of America (IDSA) reported in 2004 that in US hospitals alone, around 2 million people acquire bacterial infections each year (http://www.idsociety.org/Content.aspx?id=4682). S. aureus is responsible for half of the hospital-associated infections and takes the lives of approximately 100 000 patients each year in the USA alone. 18 The bacteria produce a biofilm in which they are encased and protected from the environment. Biofilms can grow on wounds, scar tissues and medical implants or devices, such as joint prostheses, spinal instrumentations, catheters, vascular prosthetic grafts and heart valves.
More than 70% of the bacterial species producing such biofilms are likely to be resistant to at least one of the drugs commonly used in anti-infectious therapy. 14 Bacteria acquired in hospitals are the so-called 'nosocomial bacteria.' More than 60% of sepsis cases in hospitals are caused by Gram-negative bacteria. 14 Among them, Pseudomonas aeruginosa accounts for almost 80% of these opportunistic infections. It represents a serious problem in patients hospitalized with cancer, cystic fibrosis and burns, causing death in 50% of cases. Other infections caused by Pseudomonas species include endocarditis, pneumonia and infections of the urinary tract, central nervous system, wounds, eyes, ears, skin and musculoskeletal system. This bacterium is another example of a naturally multidrug-resistant microorganism. Although many strains are susceptible to gentamicin, tobramycin and amikacin, resistant forms have also developed. These multidrug-resistant bacteria make hospitals "dangerous places to be, especially if you are sick, but even if not." 19

Although we are seeing a steady increase over time in resistance to most of the current antibiotics in almost every pathogen, not all antibacterial agents show the same rate of resistance development. For example, antimicrobials such as rifampicin, which target a single enzyme, are most susceptible to the development of resistance, whereas agents that irreversibly inactivate several targets generate resistance more slowly. In addition to the antibiotic-resistance problem, new families of anti-infective compounds need to enter the marketplace at regular intervals to tackle the new diseases caused by evolving pathogens. At least 30 new diseases emerged in the 1980s and 1990s, and they are growing in incidence. Emerging infectious organisms often encounter hosts with no prior exposure to them and thus represent a novel challenge to the host's immune system. Several viruses responsible for human epidemics have made the transition from an animal host to humans and are now transmitted from human to human. HIV, responsible for the acquired immunodeficiency syndrome (AIDS) epidemic, is one example. Although it has not been proven, it is suspected that severe acute respiratory syndrome (SARS), caused by the SARS coronavirus, also evolved from a different species. 20

In the early 1990s, after decades of decline, the incidence of tuberculosis began to increase. The epidemic took place owing to inadequate treatment regimens, a diminished public health system and the onset of the HIV/AIDS epidemic. The WHO has predicted that between 2000 and 2020, nearly 1 billion people will become infected with Mycobacterium tuberculosis and that this disease will cost the lives of 35 million people. Sexually transmitted diseases have also increased during these decades, especially in young people (aged 15-24 years). The human papillomavirus, chlamydia, genital herpes, gonorrhea and HIV/AIDS are examples. HIV/AIDS has infected more than 40 million people in the world. Together with other diseases such as tuberculosis and malaria, HIV/AIDS accounts for over 300 million illnesses and more than 5 million deaths each year.
Additional evolving pathogens include (i) the Ebola virus, which causes a viral hemorrhagic fever syndrome with a resultant mortality rate of 88%; (ii) the bacterium Legionella pneumophila, a ubiquitous aquatic organism that lives in warm environments and causes Legionnaire's disease, a pulmonary infection; (iii) the Hantavirus, which can infect humans with two serious illnesses, hemorrhagic fever with renal syndrome and Hantavirus pulmonary syndrome; and (iv) at least three species of bacteria from the genus Borrelia, which cause Lyme disease, an emerging infection. In this case, the infection is acquired from the bite of ticks belonging to several species of the genus Ixodes. Borrelia burgdorferi is the predominant cause of Lyme disease in the US, whereas Borrelia afzelii and Borrelia garinii are implicated in most European cases. The disease presentation varies widely and may include a rash and flu-like symptoms in its initial stage, followed by musculoskeletal, arthritic, neurologic, psychiatric and cardiac manifestations. In the majority of cases, symptoms can be eliminated with antibiotics, especially if treatment begins early in the course of illness. However, late or inadequate treatment can lead to 'late-stage' Lyme disease that can be disabling and difficult to treat. 21 (v) Another evolving pathogen is Escherichia coli O157:H7 (enterohemorrhagic E. coli), a strain that causes colitis and bloody diarrhea by producing a toxin called Shiga toxin, which damages the intestines. It is estimated that this bacterium causes infection in more than 70 000 patients a year in the USA. A further example is (vi) Cryptosporidium, an obligate intracellular parasite commonly found in lakes and rivers. Cryptosporidium parvum is one of the common species affecting the digestive and respiratory organs. Intestinal cryptosporidiosis is characterized by severe watery diarrhea, whereas pulmonary and tracheal cryptosporidiosis in humans is associated with coughing, frequently accompanied by a low-grade fever. People with severely weakened immune systems are likely to have more severe and more persistent symptoms than healthy individuals.

In the developing world, nearly 90% of infectious disease deaths are caused by six diseases or disease processes: acute respiratory infections, diarrhea, tuberculosis, HIV, measles and malaria. In both the developing and the developed nations, the leading cause of death by a wide margin is acute respiratory disease. In the developing world, acute respiratory infections are attributed primarily to seven bacteria: Bordetella pertussis, Streptococcus pneumoniae, Haemophilus influenzae, Staphylococcus aureus, Mycoplasma pneumoniae, Chlamydophila pneumoniae and Chlamydia trachomatis. In addition, the major viral causes of respiratory infections include respiratory syncytial virus, human parainfluenza viruses 1 and 3, and influenza viruses A and B, as well as some adenoviruses. These diseases are highly destructive in economic and social as well as in human terms, causing approximately 17 million deaths per year and innumerable serious illnesses, besides affecting the economic growth, development and prosperity of human societies. 22 Morse 23 identified six general factors in the emergence of infectious diseases: ecological changes, human demographics and behavior, international travel, technology and industry, microbial adaptation and change, and breakdown in public health measures. 24 One additional reason for developing new antibiotics is related to their own toxicity.
As with other therapeutic agents, the use of antibiotics may cause side effects in patients. These include mild reactions such as upset stomach, vomiting and diarrhea (cephalosporins, macrolides, penicillins and tetracyclines), rash and other mild and severe allergic reactions (cephalosporins and penicillins), sensitivity to sunlight (tetracyclines), and nervousness, tremors and seizures (quinolones). Some side effects are more severe and, depending on the antibiotic, may disrupt hearing (aminoglycosides) or damage the kidneys (aminoglycosides and polypeptides) or the liver (rifampin).

During recent decades, we have seen an increasing number of reports on the progressive development of bacterial resistance to almost all available antimicrobial agents. In the 1970s, the major problem was the multidrug resistance of Gram-negative bacteria; later, in the 1980s, the Gram-positive bacteria became important, including methicillin-resistant staphylococci, penicillin-resistant pneumococci and vancomycin-resistant enterococci. 25 In the past, the solution to the problem depended primarily on the development of novel antimicrobial agents. However, the number of new classes of antimicrobial agents being developed has decreased dramatically in recent years. The advent of resistant Gram-positive bacteria has been noticed by the pharmaceutical, biotechnology and academic communities, and some of these groups are making concerted efforts to find novel antimicrobial agents to meet this need. A new glycopeptide antibiotic, teicoplanin, was developed against infections with resistant Gram-positive bacteria, especially bacteria resistant to the glycopeptide vancomycin. In another instance, the approach involved the redesign of the natural streptogramin mixture pristinamycin to allow administration of the drug parenterally and in higher doses than the earlier oral preparation. 26 The two streptogramin components, quinupristin and dalfopristin, were chemically modified to allow intravenous administration, and the new combination was approved by the FDA for use against infections caused by vancomycin-resistant Enterococcus faecium.

Additional moves against resistant microorganisms are the glycylcyclines, developed to treat tetracycline-resistant bacteria. These modified tetracyclines show potent activity against a broad spectrum of Gram-positive and Gram-negative bacteria, including strains that carry the two major tetracycline-resistance determinants, involving efflux and ribosomal protection. Two of the glycylcycline derivatives, DMG-MINO and DMG-DMDOT, have been tested against a large number of clinical pathogens isolated from various sources. The spectrum of activity of these compounds includes organisms with resistance to antibiotics other than tetracyclines, for example, methicillin-resistant staphylococci, penicillin-resistant S. pneumoniae and vancomycin-resistant enterococci. 27 Tigecycline was approved by the FDA in 2005 as an injectable antibiotic. 28 Among the novel classes of antimicrobial agents used against resistant Gram-positive infections, we can also mention the cyclic lipopeptide antibiotic daptomycin, produced by Streptomyces roseosporus. This compound was approved by the FDA in 2003 for skin infections resulting from complications following surgery, diabetic foot ulcers and burns. It represents the first new natural antibiotic approved in many years.
Its mode of action is distinct from that of any other approved antibiotic: it rapidly kills Gram-positive bacteria by disrupting multiple aspects of bacterial membrane function (binding irreversibly to the bacterial cell membrane, causing membrane depolarization, destroying the ion concentration gradient and provoking the efflux of K+). It acts against most clinically relevant Gram-positive bacteria (Staphylococcus aureus, Streptococcus pyogenes, Streptococcus agalactiae, Streptococcus dysgalactiae subsp. equisimilis and Enterococcus faecalis), and retains in vitro potency against isolates resistant to methicillin, vancomycin and linezolid. Traditionally, these infections were treated with penicillins and cephalosporins, but resistance to these agents became widespread. 29-32 Daptomycin seems to have a favorable side-effect profile, and it might be used to treat patients who cannot tolerate other antibiotics.

Telithromycin, a macrolide antibiotic, is the first orally active compound of a new family of antibacterials named the ketolides. It shows potent activity against pathogens implicated in community-acquired respiratory tract infections, irrespective of their β-lactam, macrolide or fluoroquinolone susceptibility. Some of the microorganisms susceptible to this antibiotic are pneumococci, H. influenzae and Moraxella catarrhalis, including β-lactamase-positive strains. In addition, telithromycin has a very low potential for the selection of resistant isolates or the induction of the cross-resistance found with other macrolides. 33

Clavulanic acid, first detected in Streptomyces clavuligerus, contains a bicyclic β-lactam ring fused to an oxazolidine ring with an oxygen in place of a sulfur, a β-hydroxyethylidene substituent at C-2 and no acylamino group at C-6. It was first described in 1976 and shown to be a potent inhibitor of the β-lactamases produced by staphylococci and of the plasmid-mediated β-lactamases of E. coli, Klebsiella, Proteus, Shigella, Pseudomonas and Haemophilus. Although it is a broad-spectrum antibiotic, clavulanic acid possesses only very low antibacterial activity. Therefore, the molecule has been combined, as a β-lactamase inhibitor, with a variety of broad-spectrum semisynthetic penicillins. For example, when administered with amoxicillin, it is used for the treatment of infections caused by β-lactamase-producing pathogenic bacteria. 34 It has world sales of over US$1 billion, and in 1995 it was the second-largest-selling antibacterial drug. Clavulanic acid can also be combined with ticarcillin, a penicillin effective against organisms such as E. coli, Proteus, Salmonella, Haemophilus, Pseudomonas and S. aureus. It is normally used in hospitals for treating severe infections affecting the blood or internal organs, bones and joints, the upper or lower airways, or the skin and soft tissue. The combination extends ticarcillin's antimicrobial activity by inhibiting the action of the β-lactamases produced by certain bacteria.

Mycosis is a condition in which fungi pass the resistance barriers of the human or animal body and establish infections. These organisms are harmless most of the time, but sometimes they can cause fungal infections. In most cases, these infections are not life threatening. However, when they are deeply invasive and disseminated, they lead to more serious infections, particularly in critically ill patients, elderly people and those who have conditions that affect the immune system (by disease or through the use of immunosuppressive agents).
In addition, the use of antineoplastic and broad-spectrum antibiotics, prosthetic devices and grafts, and more aggressive surgery has increased invasive fungal infections. Patients with burns, neutropenia or pancreatitis, or patients after organ transplantation (40% of liver transplants, 15-35% of heart transplants and 5% of kidney transplants), are also predisposed to fungal infection. 35 Approximately 40% of deaths from nosocomial infections are caused by fungi, and 80% of these are caused by Candida and Aspergillus, although Cryptococcus spp., Fusarium spp., Scedosporium spp., Penicillium spp. and zygomycetes are increasingly involved. 36 Pulmonary aspergillosis is the main factor involved in the death of recipients of bone marrow transplants, and Pneumocystis carinii is the leading cause of death in AIDS patients in Europe and North America. 37

The rising incidence of invasive fungal infections and the emergence of broader fungal resistance have led to the need for novel antifungal agents. Amphotericin B is the first-line therapy for systemic infection because of its broad spectrum and fungicidal activity; however, considerable side effects limit its clinical utility. Echinocandins are large lipopeptide molecules that inhibit the synthesis of 1,3-β-D-glucan, a key component of the fungal cell wall. Three echinocandins (caspofungin, micafungin and anidulafungin) have reached the market. Caspofungin, also known as pneumocandin or MK-0991, was the first cell-wall-active antifungal to be approved as a new injectable antifungal, in 2000. 38 It irreversibly inhibits 1,3-β-D-glucan synthase, preventing the formation of glucan polymers and disrupting the integrity of fungal cell walls. 39 It is more active and less toxic than amphotericin B and shows a broad spectrum of activity against Candida (including fluconazole-resistant strains), Aspergillus, Histoplasma and P. carinii, a major cause of death in AIDS patients. Micafungin is licensed for clinical use in Asian countries and in the US. This compound exhibits extremely potent antifungal activity against clinically important fungi, including Aspergillus and azole-resistant strains of Candida. In animal studies, micafungin is as efficacious as amphotericin B with respect to improvement of the survival rate, and it is characterized by a linear pharmacokinetic profile and substantially fewer toxic effects. Anidulafungin is currently licensed in the US. 40

Although several new antifungal drugs have been developed in the past 6 years, some patients remain resistant to treatment. The main reasons include intrinsic or acquired antifungal resistance, organ dysfunction preventing the use of some agents, and drug interactions. In addition, some drugs penetrate poorly into sanctuary sites, including the eye and urine, and others are associated with considerable adverse events. However, there has been some progress. Posaconazole is a new member of the triazole class of antifungals. It has shown clinical efficacy in the treatment of oropharyngeal candidiasis and has potential as a salvage therapy for invasive aspergillosis, zygomycosis, cryptococcal meningitis and a variety of other fungal infections. It is available as an oral suspension and has a favorable toxicity profile. The wide spectrum of posaconazole activity in in vitro studies, animal models and preliminary clinical studies suggests that it represents an important addition to the antifungal armamentarium. 41
In addition to screening programs for antibacterial activity, the pharmaceutical industry has extended these programs to other disease areas. 42, 43 Microorganisms are a prolific source of structurally diverse bioactive metabolites and have yielded some of the most important products of the pharmaceutical industry. Microbial secondary metabolites are now being used for applications other than antibacterial, antifungal and antiviral therapy. For example, immunosuppressants have revolutionized medicine by facilitating organ transplantation. 44 Other applications include antitumor drugs, enzyme inhibitors, gastrointestinal motor stimulator agents, hypocholesterolemic drugs, ruminant growth stimulants, insecticides, herbicides, coccidiostats, and antiparasitics active against coccidia, helminths and other parasites. Further applications are possible in various areas of pharmacology and agriculture, developments catalyzed by the use of simple enzyme assays for screening before testing in intact animals or in the field.

In the year 2000, approximately 10 million new cases of cancer were diagnosed in the world, resulting in 6 million cancer-related deaths. The tumor types with the highest incidence were lung (12.3%), breast (10.4%) and colorectal (9.4%). 45 Microbial metabolites are among the most important of the cancer chemotherapeutic agents. They started to appear around 1940 with the discovery of actinomycin, and since then many compounds with anticancer properties have been isolated from natural sources. More than 60% of the current compounds with antineoplastic activity were originally isolated as natural products or are their derivatives. Among the approved products deserving special attention are actinomycin D, the anthracyclines (daunorubicin, doxorubicin, epirubicin, pirarubicin and valrubicin), bleomycin, the mitosanes (mitomycin C), the anthracenones (mithramycin, streptozotocin and pentostatin), the enediynes (calicheamicin), taxol and the epothilones.

Actinomycin D is the oldest microbial metabolite used in cancer therapy. Its relative, actinomycin A, was the first antibiotic isolated from actinomycetes; it was obtained from Actinomyces antibioticus (now Streptomyces antibioticus) by Waksman and Woodruff. 46 As actinomycin D binds DNA at the transcription initiation complex, it prevents elongation by RNA polymerase. This property, however, confers some human toxicity, and the compound has been used primarily as an investigative tool in the development of molecular biology. Despite the toxicity, it has served well against Wilms tumor in children.

The anthracyclines are some of the most effective antitumor compounds developed, and are effective against more types of cancer than any other class of chemotherapy agents. 47 They are used to treat a wide range of cancers, including leukemias, lymphomas, and breast, uterine, ovarian and lung cancers. Anthracyclines act by intercalating between DNA strands, forming a complex that inhibits the synthesis of DNA and RNA; they also trigger DNA cleavage by topoisomerase II, setting off mechanisms that lead to cell death. In their cytotoxic effects, binding to cell membranes and plasma proteins plays an important role. Their main adverse effects are heart damage (cardiotoxicity), which considerably limits their usefulness, and vomiting. The first anthracycline discovered was daunorubicin (daunomycin) in 1966, which is produced naturally by Streptomyces peucetius. Doxorubicin (adriamycin) was developed in 1967. Another anthracycline is epirubicin.
Epirubicin, approved by the FDA in 1999, is favored over doxorubicin in some chemotherapy regimens as it appears to cause fewer side effects. Epirubicin has a different spatial orientation of the hydroxyl group at the 4′ carbon of the sugar, which may account for its faster elimination and reduced toxicity. It is primarily used against breast and ovarian cancer, gastric cancer, lung cancer and lymphomas. Valrubicin is a semisynthetic analog of doxorubicin, approved as a chemotherapeutic drug in 1999 and used to treat bladder cancer. Bleomycin is a non-ribosomal glycopeptide microbial metabolite produced as a family of structurally related compounds by the bacterium Streptomyces verticillus. First reported by Umezawa et al. 48 in 1966, bleomycin obtained FDA approval in 1973. When used as an anticancer agent (inducing DNA strand breaks), the chemotherapeutic forms are primarily bleomycins A2 and B2.

The mitosanes comprise several mitomycins that are formed during the cultivation of Streptomyces caespitosus. Although the mitosanes are excellent antitumor agents, they have limited utility owing to their toxicity. Mitomycin C was approved by the FDA in 1974; it shows activity against several types of cancer (lung, breast, bladder, anal, colorectal, head and neck), including melanomas and gastric or pancreatic neoplasms. 49 Recently, mitomycin dimers have been explored as potential alternatives for lowering toxicity and increasing efficiency. 50

Mithramycin (plicamycin) is an antitumor aromatic polyketide produced by Streptomyces argillaceus that shows antibacterial and antitumor activity. 51 It is one of the older chemotherapy drugs, used in the treatment of testicular cancer, disseminated neoplasms and hypercalcemia. It binds to G-C-rich DNA sequences, inhibiting the binding of transcription factors such as Sp1, which is believed to affect neuronal survival/death pathways. It may also indirectly regulate gene transcription by altering histone methylation. With repeated use, organotoxicity (kidney, liver and hematopoietic system) can become a problem.

Streptozotocin is a microbial metabolite with antitumor properties, produced by Streptomyces achromogenes. Chemically, it is a glucosamine-nitrosourea compound. As with other alkylating agents in the nitrosourea class, it is toxic to cells by causing damage to DNA, although other mechanisms may also contribute. The compound is selectively toxic to the β-cells of the pancreatic islets. It is similar enough to glucose to be transported into the cell by the glucose transport protein of these cells, but it is not recognized by the other glucose transporters. As β-cells have relatively high levels of glucose permease, the relative toxicity of streptozotocin for these cells can be explained. 52 In 1982, the FDA granted approval for streptozotocin as a treatment for pancreatic islet cell cancer.

Pentostatin (deoxycoformycin) is an anticancer chemotherapeutic drug produced by S. antibioticus. It is classified as a purine analog: it mimics the nucleoside adenosine and thus tightly binds and inhibits adenosine deaminase (Ki of 2.5×10⁻¹² M), interfering with the cell's ability to process DNA. 53 Pentostatin is commonly used to treat hairy cell leukemia, acute lymphocytic leukemia, prolymphocytic leukemia (of B- and T-cell origin), T-cell leukemia and lymphoma. However, it can cause kidney, liver, lung and neurological toxicity. 54 The FDA granted approval for pentostatin in 1993.
Calicheamicins are highly potent antitumor microbial metabolites of the enediyne family, produced by Micromonospora echinospora. Their antitumor activity is apparently due to the cleavage of double-stranded DNA. 55 They are highly toxic, but it was possible to introduce one such compound into the clinic by attaching it to an antibody that delivers it selectively to certain cancer types. This ingenious idea of the Wyeth Laboratories avoided the side effects of calicheamicin. In this regard, gemtuzumab is effective against acute myelogenous leukemia (AML). Calicheamicin is bound to a monoclonal antibody against a transmembrane receptor (CD33) expressed on cells of monocytic/myeloid lineage. CD33 is expressed in most leukemic blast cells, but in normal hematopoietic cells its intensity diminishes with maturation. The conjugate was approved by the FDA for use in patients over the age of 60 years with relapsed AML who are not considered candidates for standard chemotherapy. 56

A successful non-actinomycete molecule is taxol (paclitaxel), which was first isolated from the Pacific yew tree, Taxus brevifolia, but is also produced by the endophytic fungi Taxomyces andreanae and Nodulisporium sylviforme. 57 This compound inhibits rapidly dividing mammalian cancer cells by promoting tubulin polymerization and interfering with normal microtubule breakdown during cell division. The drug also inhibits several fungi (Pythium, Phytophthora and Aphanomyces) by the same mechanism. In 1992, taxol was approved for refractory ovarian cancer, and today it is used against breast cancer and advanced forms of Kaposi's sarcoma. 58 A new formulation is available in which paclitaxel is bound to albumin. Taxol sales amounted to US$1.6 billion in 2006 for Bristol-Myers Squibb, representing 10% of the company's pharmaceutical sales and its third-largest-selling product. Currently, taxol production uses plant cell fermentation technology.

The epothilones (a name derived from their molecular features: epoxide, thiazole and ketone) are macrolides originally isolated from the broth of the soil myxobacterium Sorangium cellulosum as weak agents against rust fungi. 59 They were identified as microtubule-stabilizing drugs, acting in a similar manner to taxol. 60, 61 However, they are generally 5-25 times more potent than taxol in inhibiting cell growth in culture. Five analogs are now undergoing investigation as candidate anticancer drugs, and their preclinical studies have indicated a broad spectrum of antitumor activity, including activity against taxol-resistant tumor cells. With the best currently available therapies, the median survival time for patients with metastatic breast cancer is only 2-3 years, and many patients develop resistance to taxanes or other chemotherapy drugs. One epothilone, ixabepilone, was approved in October 2007 by the FDA for use in the treatment of aggressive metastatic or locally advanced breast cancer no longer responding to currently available chemotherapies. 62 In tumor cells, P-glycoprotein reduces intracellular antitumor drug concentrations, thereby limiting the access of chemotherapeutic substrates to the site of action. The epothilones are attractive because they are active against P-glycoprotein-producing tumors and have good solubility. 62 Epothilone B is a 16-membered polyketide macrolactone with a methylthiazole group connected to the macrocycle by an olefinic bond.
Testicular cancer is the most common cancer diagnosed in men between the ages of 15 and 35 years, with approximately 8000 cases detected in the United States annually. 63 The majority (95%) of testicular neoplasms are germ cell tumors, which are relatively uncommon carcinomas, accounting for only 1% of all male malignancies. Remarkable progress has been made in the medical treatment of advanced testicular cancer, with a substantial increase in cure rates from approximately 5% in the early 1970s to almost 90% today. 64, 65 This cure rate is the highest of any solid tumor, and the improved survival is primarily due to effective chemotherapy. A major advance in chemotherapy for testicular germ cell tumors was the introduction of cisplatin in the mid-1970s. Two chemotherapy regimens are effective for patients with a good testicular germ cell tumor prognosis: four cycles of etoposide and cisplatin, or three cycles of bleomycin, etoposide and cisplatin. 66 Of the latter three agents, bleomycin and etoposide are natural products.

Enzyme inhibitors have received increasing attention as useful tools, not only for the study of enzyme structures and reaction mechanisms but also for potential utilization in medicine and agriculture. Several enzyme inhibitors with various industrial uses have been isolated from microbes. 67 The most important are clavulanic acid, the inhibitor of β-lactamases discussed above in the section 'Moves against antibiotic resistance development in bacteria,' and the statins, the hypocholesterolemic drugs presented below in the section 'Hypocholesterolemic drugs.' Some of the common targets for other inhibitors are glucosidases, amylases, lipases, proteases and xanthine oxidase (XO).

Acarbose is a pseudotetrasaccharide made by Actinoplanes sp. SE50. It contains an aminocyclitol moiety, valienamine, which inhibits intestinal α-glucosidase and sucrase. This results in a decrease in starch breakdown in the intestine, which is useful in combating diabetes in humans. 68 Amylase inhibitors are useful for the control of carbohydrate-dependent diseases, such as diabetes, obesity and hyperlipemia. 69, 70 Amylase inhibitors are also known as starch blockers because they contain substances that prevent dietary starches from being absorbed by the body. The inhibitors may also be useful for weight loss, as some versions of amylase inhibitors do show potential for reducing carbohydrate absorption in humans. 71, 72 The use of amylase inhibitors for the treatment of rumen acidosis has also been reported. 73 Examples of microbial α-amylase inhibitors are paim, obtained from culture filtrates of Streptomyces corchorushii, 74 and TAI-A and TAI-B, oligosaccharide compounds from Streptomyces calvus TM-521. 75

Lipstatin is a pancreatic lipase inhibitor produced by Streptomyces toxytricini that is used to combat obesity and diabetes; it interferes with the gastrointestinal absorption of fat. 76 The commercial product is tetrahydrolipstatin, also known as orlistat. In the pathogenic processes of some diseases, such as emphysema, arthritis, pancreatitis, cancer and AIDS, protease inhibitors are potentially powerful tools for inactivating the target proteases. Examples of microbial products include antipain, produced by Streptomyces yokosukaensis, leupeptin from Streptomyces roseochromogenes and chymostatin from Streptomyces hygroscopicus. 70 Leupeptin is produced by more than 17 species of actinomycetes. 67 XO catalyzes the oxidation of hypoxanthine to uric acid via xanthine.
An excessive accumulation of uric acid in the blood, called hyperuricemia, causes gout. 77 Inhibitors of XO decrease uric acid levels, which results in an antihyperuricemic effect. A potent inhibitor of XO, hydroxyakalone, was purified from the fermentation broth of Agrobacterium aurantiacum sp. nov., a marine bacterial strain. 78 Fungal products are also used as enzyme inhibitors against cancer, diabetes, poisonings, Alzheimer's disease, etc. The enzymes inhibited include acetylcholinesterase, protein kinases, tyrosine kinases, glycosidases and others. 79

Immunosuppressants

Suppressor cells are critical in the regulation of the normal immune response. An individual's immune system is capable of distinguishing between native and foreign antigens and of mounting a response only against the latter, and a major role has been established for suppressor T lymphocytes in this phenomenon. Suppressor cells also play a role in regulating the magnitude and duration of the specific antibody response to an antigenic challenge. Suppression of the immune response, either by drugs or by radiation, to prevent the rejection of grafts or transplants or to control autoimmune diseases, is called immunosuppression. A number of microbial compounds capable of suppressing the immune response have been discovered.

Cyclosporin A was originally introduced as a narrow-spectrum antifungal peptide produced, by aerobic fermentation, by the mold Tolypocladium niveum (originally classified as Trichoderma polysporum and later as Tolypocladium inflatum). Cyclosporins are a family of neutral, highly lipophilic, cyclic undecapeptides containing some unusual amino acids, synthesized by a non-ribosomal peptide synthetase, cyclosporin synthetase. Discovery of the immunosuppressive activity led to its use in heart, liver and kidney transplants and to the overwhelming success of the organ transplant field. 80 Cyclosporin was approved for use in 1983. It is thought to bind to the cytosolic protein cyclophilin (an immunophilin) of immunocompetent lymphocytes, especially T lymphocytes. The complex of cyclosporin and cyclophilin inhibits calcineurin, which under normal circumstances is responsible for activating the transcription of interleukin-2. Cyclosporin also inhibits lymphokine production and interleukin release, and therefore leads to a reduced function of effector T cells. Sales of cyclosporin A have reached US$1.5 billion per year.

Other important transplant agents include sirolimus (rapamycin) and tacrolimus (FK506), which are produced by actinomycetes. Rapamycin is especially useful in kidney transplants as it lacks the nephrotoxicity seen with cyclosporin A and tacrolimus. It is a macrolide, first discovered in 1975 as a product of S. hygroscopicus and initially proposed as an antifungal agent. However, this use was abandoned when it was discovered that rapamycin has potent immunosuppressive and antiproliferative properties. The compound binds to the immunophilin FK506-binding protein (FKBP12), and this binary complex interacts with the rapamycin-binding domain and inactivates a serine-threonine kinase termed the mammalian target of rapamycin (mTOR). The latter is known to control proteins that regulate mRNA translation initiation and G1 progression. 81 The antiproliferative effect of rapamycin has also been used in conjunction with coronary stents to prevent restenosis, which usually occurs after the treatment of coronary artery disease by balloon angioplasty.
Rapamycin also shows promise in treating tuberous sclerosis complex (TSC), a congenital disorder that leaves sufferers prone to benign tumor growth in the brain, heart, kidneys, skin and other organs. In a study of rapamycin as a treatment for TSC, University of California, Los Angeles (UCLA) researchers observed a major improvement in mice with respect to the mental retardation related to autism. 82 As rapamycin has poor aqueous solubility, some of its analogs, RAD001 (everolimus), CCI-779 (temsirolimus) and AP23573 (ARIAD), have been developed with improved pharmaceutical properties. Everolimus is currently used as an immunosuppressant to prevent the rejection of organ transplants. Although it does not have FDA approval in the USA, it is approved for use in Europe and Australia, and phase III trials are being conducted in the US. Everolimus may have a role in heart transplantation, as it has been shown to reduce chronic allograft vasculopathy in such transplants. 83 Everolimus is also used in drug-eluting coronary stents as an immunosuppressant to prevent rejection. CCI-779 is a rapamycin ester that can be converted to rapamycin in vivo. RAD001 is a rapamycin analog currently being investigated in phase II trials for recurrent endometrial cancer as a single agent, and in phase I/II trials for the treatment of glioblastoma in combination with an inhibitor of certain members of the epidermal growth factor receptor and vascular endothelial growth factor receptor families. 84 AP23573 is a novel non-prodrug rapamycin analog with nonlinear pharmacokinetic behavior that has demonstrated antiproliferative activity against several human tumor cell lines in vitro and against experimental tumors in vivo. 85 This agent is currently under evaluation in phase I-II trials, including patients with different tumors. Two additional small-molecule rapamycin analogs, AP23841 and AP23675, are currently in preclinical development for the treatment of bone metastases and primary bone cancer. 86 Tacrolimus (FK506) was discovered in 1987 in Japan. 87 It is produced by Streptomyces tsukubaensis. However, its use was almost abandoned because of dose-associated toxicity. Dr Thomas Starzl (University of Pittsburgh) rescued it by using lower doses, realizing that it was approximately 100 times more active as an immunosuppressant than cyclosporin A. 88 It was introduced in Japan in 1993, and in 1994 it was approved by the FDA for use as an immunosuppressant in liver transplantation. Furthermore, its use has been extended to include bone marrow, cornea, heart, intestine, kidney, lung, pancreas, trachea, small bowel, skin and limb transplants, and the prevention of graft-vs-host disease. Topically, it is also used against atopic dermatitis, a widespread skin disease. In the laboratory, tacrolimus inhibits the mixed lymphocyte reaction, the formation of interleukin-2 by T lymphocytes, and the formation of other soluble mediators, including interleukin-3 and interferon-γ. Recently, it has been reported that tacrolimus inhibits transforming growth factor-β-induced signaling and collagen synthesis in human lung fibroblastic cells. This factor plays a pivotal role in tissue fibrosis, including pulmonary fibrosis. Therefore, tacrolimus may be useful for the treatment of pulmonary fibrosis, although its use in the acute inflammatory phase may exacerbate lung injury.
89 Hypocholesterolemic drugs Atherosclerosis is generally viewed as a chronic, progressive disease characterized by the continuous accumulation of atheromatous plaque within the arterial wall. The past two decades have witnessed the introduction of a variety of anti-atherosclerotic therapies. The statins form a class of hypolipidemic drugs used to lower cholesterol by inhibiting the enzyme HMG-CoA reductase, the rate-limiting enzyme of the mevalonate pathway of cholesterol biosynthesis. Inhibition of this enzyme in the liver stimulates low-density lipoprotein (LDL) receptors, resulting in an increased clearance of LDL from the bloodstream and a decrease in blood cholesterol levels. Through their cholesterol-lowering effect, the statins reduce the risk of cardiovascular disease, prevent stroke and reduce the development of peripheral vascular disease. 90 In addition, they are anti-thrombotic and anti-inflammatory. Currently there are a number of statins in clinical use. The entire group of statins reached an annual market of nearly US$30 billion before its major members became generic pharmaceuticals. The first member of the group (compactin; mevastatin) was isolated as an antibiotic product of Penicillium brevicompactum and later from Penicillium citrinum. Although not of commercial importance, compactin's derivatives achieved overwhelming medical and commercial success. A methylated form, known as lovastatin (monacolin K; mevinolin), was isolated in the 1970s from the broths of Monascus ruber and Aspergillus terreus. 91 Lovastatin, the first commercially marketed statin, was approved by the FDA in 1987. A semisynthetic derivative of lovastatin is simvastatin, a major hypocholesterolemic drug that sold for US$7 billion per year before becoming generic. Another statin, pravastatin (US$3.6 billion per year), is made through different biotransformation processes from compactin by Streptomyces carbophilus 92 and Actinomadura sp. 93 Other genera involved in the production of statins are Doratomyces, Eupenicillium, Gymnoascus, Hypomyces, Paecilomyces, Phoma, Trichoderma and Pleurotus. 94 A synthetic compound, modeled on the structure of the natural statins, is atorvastatin, which has been the leading drug of the entire pharmaceutical industry in terms of market share (approximately US$14 billion per year) for many years. An insecticide is a pesticide used against insects in all developmental forms. Insecticides include ovicides and larvicides, used against the eggs and larvae of insects, respectively. Insecticides are used in agriculture, medicine, industry and households. The use of insecticides is believed to be one of the major factors behind the increase in agricultural productivity in the twentieth century. Synthetic insecticides pose some hazards, whereas natural insecticides offer adequate levels of pest control and pose fewer hazards. Microbially produced insecticides are especially valuable because their toxicity to non-target animals and humans is extremely low. Compared with other commonly used insecticides, they are safe for both the pesticide users and consumers of treated crops. The action of microbial insecticides is often specific to a single group or species of insects, and this specificity means that most microbial insecticides do not naturally affect beneficial insects (including predators or parasites of pests) in treated areas. The spinosyns (A83543 group) are a group of natural products produced by Saccharopolyspora spinosa that were discovered in 1989.
The researchers isolated spinosyns A and D, as well as 21 minor analogs. The spinosyns are active on a wide variety of insect pests, especially lepidopterans and dipterans, but do not have antibiotic activity. 95 The compounds attack the nervous system of insects by targeting two key neurotransmitter receptors, with no cross-resistance to other known insecticides. The spinosyns are a family of macrolides with 21 carbon atoms, containing four connected rings of carbon atoms at their core, to which two deoxysugars (forosamine and 2,3,4-tri-O-methylrhamnose, which are required for bioactivity) are attached. Novel spinosyns have been prepared by biotransformation, using a genetically engineered strain of Saccharopolyspora erythraea. 96 A mixture of spinosyn A (85%) and D (15%) (spinosad) is produced through fermentation and was introduced to the market in 1997 for the control of chewing insects on a variety of crops. Spinosyn formulations were recently approved for use on organic crops and for animal health applications. Recently, a new naturally occurring series of insect-active compounds was discovered from a novel soil isolate, Saccharopolyspora pogona NRRL30141. 97 The culture produced a unique family of over 30 new spinosyns. They have a butenyl substitution at position 21 of the spinosyn lactone and are named butenyl-spinosyns or pogonins. Herbicides are chemicals marketed to inhibit or interrupt normal plant growth and development. They are widely used in agriculture, industry and urban areas for weed management. Approximately 30 000 weed species are widely distributed in the world; yield losses caused by 1800 of these species amount to approximately 9.7% of total crop production every year. 98 Herbicides provide cost-effective weed control with a minimum of labor. Most are used on crops planted over large acreages, such as soy, cotton, corn and canola. 99 There are numerous classes of herbicides with different modes of action, as well as different potentials for adverse effects on health and the environment. Over the past century, the chemical herbicides used to control various weeds may have caused many serious side effects, such as crop injury, threats to the applicator and others exposed to the chemicals, herbicide-resistant weed populations, reduction of soil and water quality, herbicide residues and detrimental effects on non-target organisms. 100 For example, alachlor and atrazine were reported to cause cancer in animal tests. With increasing global environmental consciousness, bioherbicides, which are highly effective for weed control and environmentally friendly as well, are very attractive both for research and for application. Microbial herbicides can be divided into microbial preparations (microorganisms that control weeds) and microbially derived herbicides. The first microbial herbicide was independently discovered in Germany and Japan. In 1972, the Zähner group in Germany isolated phosphinothricin tripeptide, a peptide antibiotic consisting of two molecules of L-alanine and one molecule of the unusual amino acid L-phosphinothricin; that is, N-(4-[hydroxy(methyl)phosphinoyl]homoalanyl)alanylalanine. They isolated it from Streptomyces viridochromogenes as a broad-spectrum antibiotic, with activity including the fungus Botrytis cinerea. 101 In Japan, it was discovered at the Meiji Seika laboratories in 1973 from S. hygroscopicus and named bialaphos.
102 The bioactive L-phosphinothricin is a structural analog of glutamic acid that acts as a competitive inhibitor of glutamine synthetase, and it has bactericidal (against Gram-positive and Gram-negative bacteria), fungicidal (against B. cinerea) and herbicidal properties. 103 Glufosinate (DL-phosphinothricin, lacking the Ala-Ala dipeptide) was developed as a herbicide; the agent therefore acts as a herbicide with or without Ala-Ala. Bialaphos has no influence on microorganisms in the soil and is easily degraded in the environment, having a half-life of only 2 h. This low level of environmental impact is of great interest to environmentalists. In 2006, the global animal health market was valued at US$16 billion, of which 29% was derived from parasiticides. Parasites are organisms that inhabit the body and benefit from a prolonged, close association with the host. Antiparasitics are compounds that inhibit the growth or reproduction of a parasite; some antiparasitics directly kill parasites. In general, parasites are much smaller than their hosts, show a high degree of specialization for their mode of life and reproduce more quickly and in greater numbers than their hosts. Classic examples of parasitism include the interactions between vertebrate hosts and such diverse animals as tapeworms, flukes, Plasmodium species and fleas. Parasitic infections can cause potentially serious health problems and even kill the host. Parasites mainly enter the body through the mouth, usually through ingestion of tainted food or drink. This is a very common problem in tropical areas, but it is not limited to those regions. There are 3200 varieties of parasites in four major categories: Protozoa, Trematoda, Cestoda and Nematoda. The major groups include protozoans (organisms having only one cell) and parasitic worms (helminths). Each of these can infect the digestive tract, and sometimes two or more can cause infection at the same time. The WHO has reported that approximately 25% of the world's population is infected with roundworms. In addition, a major agricultural problem has been the infection of farm animals by worms. The predominant type of antiparasitic screening effort over the years was the testing of synthetic compounds against nematodes, and some commercial products did result. Certain antibiotics were also shown to possess anthelmintic activity against nematodes or cestodes, but these failed to compete with the synthetic compounds. Although Merck had earlier developed a commercially useful synthetic product, thiabendazole, they had enough foresight to examine microbial broths for anthelmintic activity, and found a non-toxic fermentation broth that killed the intestinal nematode Nematospiroides dubius in mice. The Streptomyces avermitilis culture, isolated by Ōmura and coworkers at the Kitasato Institute in Japan, 104 produced a family of secondary metabolites (eight compounds) with both anthelmintic and insecticidal activities. These compounds, named 'avermectins,' are pentacyclic, 16-membered macrocyclic lactones that harbor a disaccharide of the methylated sugar oleandrose, with exceptional activity against parasites, especially Nemathelminthes (nematodes) and arthropod parasites (10 times higher than that of any known synthetic anthelmintic agent). Surprisingly, the avermectins lack activity against bacteria and fungi, do not inhibit protein synthesis and are not ionophores. Instead, they interfere with neurotransmission in many invertebrates, causing paralysis and death through neuromuscular effects.
105 The annual market for avermectins surpasses US$1 billion. They are used against both nematode and arthropod parasites in sheep, cattle, dogs, horses and swine. A semisynthetic derivative, 22,23-dihydroavermectin B1 ('ivermectin'), is 1000 times more active than thiabendazole and is a commercial veterinary product. The efficacy of ivermectin has made it a promising candidate for the control of human onchocerciasis and human strongyloidiasis. 106 Another avermectin, called doramectin (or cyclohexyl avermectin B1), produced by 'mutational biosynthesis,' was commercialized for use in food animals. 107 A semisynthetic monosaccharide derivative of doramectin called selamectin is the most recently commercialized avermectin, and it is active against heartworms (Dirofilaria immitis) and fleas in companion animals. Although the macrocyclic backbone of each of these molecules (ivermectin, doramectin and selamectin) is identical, there are different substitutions at pharmacologically relevant sites such as C-5, C-13, C-22,23 and C-25. 108 The avermectins are closely related to the milbemycins, a group of non-glycosidated macrolides produced by S. hygroscopicus subsp. aureolacrimosus. 109 These compounds possess activity against worms and insects. Coccidiostats are used for the prevention of coccidiosis in both extensively and intensively reared poultry. Coccidiosis is the name given to a common intestinal disease caused by invading protozoan parasites of the genus Eimeria that affects several different animal species (cattle, dogs, cats, poultry, etc.). The major damage is caused by the rapid multiplication of the parasite in the intestinal wall and the subsequent rupture of the cells of the intestinal lining, leading to high mortality and severe loss of productivity. Coccidia are obligate intracellular parasites that show host specificity; only cattle coccidia will cause disease in cattle, and other species-specific coccidia will not. For many years, synthetic compounds were used to combat coccidiosis in poultry; however, resistance developed rapidly. A solution came on the scene with the discovery of the narrow-spectrum polyether antibiotic monensin, which had extreme potency against coccidia. 110 Made by Streptomyces cinnamonensis, monensin led the way for additional microbial ionophoric antibiotics, such as lasalocid, narasin and salinomycin. All are produced by various Streptomyces species. They form complexes with the polar cations K+, Na+, Ca2+ and Mg2+, severely affecting the osmotic balance in the parasitic cells and thus causing their death. 111 The widespread use of anticoccidials has revolutionized the poultry industry by reducing the mortality and production losses caused by coccidiosis. Of great interest was another extremely valuable application of monensin, that is, growth promotion in ruminants. Synthetic chemicals had been tested for years to inhibit wasteful methane production by cattle and sheep and to increase fatty acid formation (especially propionate) to improve feed efficiency; however, they failed. The solution was monensin, which became a major success as a ruminant growth enhancer. 110 For more than 40 years, certain antibiotics have been used in food-animal production to enhance feed utilization and weight gain. 112 From a production standpoint, feed antibiotics have been consistently shown to improve animal weight gain and feed efficiency, especially in younger animals.
These responses are probably derived from an inhibitory effect on the normal microbiota, which can lead to reduced intestinal inflammation and improved nutrient utilization. 113 Pigs in the USA are exposed to a great variety of antibiotics. These include β-lactam antibiotics (including penicillins), lincosamides, macrolides (including erythromycin) and tetracyclines. All these groups have members that are used to treat infections in humans. In addition, bacitracin, flavophospholipol, pleuromutilins, quinoxalines and virginiamycin are utilized as growth stimulants. Flavophospholipol and virginiamycin are also used as growth promoters in poultry. As described above, cattle are also exposed to ionophores such as monensin to promote growth. The Animal Health Institute of America 114 has estimated that without the use of growth-promoting antibiotics, the USA would require an additional 452 million chickens, 23 million more cattle and 12 million more pigs to reach the levels of production attained by current practices. Animal health research and new anti-infective product discovery have decreased over the past 15 years, with few new drug approvals. 115 Therefore, it will be incumbent on veterinary practitioners to use the existing products in a responsible manner to ensure their longevity. It remains to be seen what effects the dearth of new antibiotics for veterinary medicine will have on the future practice of veterinary medicine, production agriculture, food safety and public health. 116 Since the 1999 EU decision to prohibit antibiotic use for food-animal growth promotion, four antibiotic growth promoters have been banned, including the macrolide drugs tylosin and spiramycin. 117 Although macrolides are no longer formally used as 'growth promoters,' their use under veterinary prescription rose from 23 tons in 1998 to 55 tons in 2001, which suggests that more of them are being used now than before the prohibition. It is well known that the most effective route for feeding is via the gastrointestinal tract. Many critically ill patients who receive early feeding show improved outcomes. In some post-operative patients, gastric stasis and excessive volumes in the stomach increase the risk of aspiration and subsequent pneumonia. On account of the importance of achieving early and adequate nutritional intake, it is common practice in many intensive care units to use drugs to improve gastrointestinal motility. Erythromycin is a macrolide antibiotic with a broad spectrum of activity. It is well recognized that when prescribed, either intravenously or orally, it causes side effects such as diarrhea, nausea and vomiting. These side effects are, in part, due to the action of erythromycin at motilin receptors in the gut. This makes the antibiotic very attractive for use in ill patients with gastrointestinal motility problems. There have been some developments of erythromycin analogs that lack antibiotic action but retain action at motilin receptors. These have been named 'motilides.' 118, 119 Recently, an orally active erythromycin-derived motilin receptor agonist (mitemcinal) has been tested in patients with idiopathic and diabetic gastroparesis. In both cases, an improvement of gastroparetic symptoms was observed. 120 The 80-year contribution of microorganisms to medicine and agriculture has been overwhelming.
However, antibiotic resistance in microbes has created a dangerous situation, and the need for new antibiotics is clear. Unfortunately, most of the large pharmaceutical companies have abandoned the search for new antimicrobial compounds. Owing to the economics, they have concluded that drugs directed against chronic diseases offer a better revenue stream than do antimicrobial agents, as for the latter the length of treatment is short and government restriction is likely. Some small pharmaceutical and biotechnology companies are developing antibiotics, but most depend on venture capital rather than sales income, and with the present regulations, they face huge barriers to entering the market. These barriers were raised with the best intentions of ensuring public safety, but they will have the opposite effect if they terminate antibiotic development while resistance continues to increase. 121 However, there are some bright possibilities. One of the most promising is the utilization of uncultivated microorganisms. Considering that 99% of bacteria and 95% of fungi have not been cultivated in the laboratory, efforts to find means of growing such microorganisms are proceeding and succeeding. 122 Furthermore, researchers are now extracting bacterial DNA from soil and marine habitats, cloning large fragments into, for example, bacterial artificial chromosomes, expressing them in a host bacterium and screening the library for new antibiotics. This metagenomic effort is allowing access to a vast untapped reservoir of genetic and metabolic diversity, 123, 124 which could result in the discovery of new and useful natural products. 125 In addition to these two relatively new techniques, the chemical and biological modification of old antibiotics could still supply new and powerful drugs. These comments also apply to non-antibiotics such as antitumor agents and other microbial products.
• Mortality rates in COVID-19 range from 0.4% to 16.3%, with increased rates in hospitalized patients, the elderly and other vulnerable populations.
• Studies in other seriously ill individuals demonstrate that frailty is a predictor of mortality.
• Frailty, measured by the preadmission Palliative Performance Scale, is independently predictive of mortality in patients admitted with COVID-19.
On 11 March 2020, the World Health Organization declared coronavirus disease 2019 (COVID-19) a global pandemic, and as of 23 May, 5 million cases had been confirmed. 1 Overall mortality of COVID-19 ranges from 0.4% to 16.3%, 2 with increased rates in hospitalized patients, the elderly, and other vulnerable groups. 3-5 Although advanced age is clearly a predictor of poor outcomes in COVID-19, 6 data to help accurately predict risk are still lacking. In other conditions, frailty has been found to be a predictor of mortality, 7 and its evaluation in COVID-19 has been recommended. 8 However, there has been a paucity of literature evaluating the effect of frailty in patients with COVID-19. An improved ability to identify patients at high risk of death will improve clinicians' ability to provide appropriate palliative care, including engaging in shared decision-making with our patients about life-sustaining therapies. Many frailty assessments exist. However, many are complicated and hard to determine at the bedside in critically ill patients, where history is limited and patients may not be able to participate physically or even verbally in the assessment. The Palliative Performance Scale, on the other hand, consists of only five domains, which allows it to be easily calculated at the bedside based on history from patients or their families. The Palliative Performance Scale is a validated tool to assess frailty and to prognosticate survival in seriously ill populations. 9, 10 We sought to determine whether a frailty measure such as this would correlate with mortality. We therefore applied the Palliative Performance Scale to patients hospitalized during the initial COVID-19 surge in a public urban hospital. We hypothesized that a low preadmission Palliative Performance Scale score would independently predict mortality in hospitalized patients with COVID-19. We performed a retrospective observational cohort study of all patients with a positive COVID-19 RNA nasopharyngeal swab admitted to an urban public hospital that treats a largely underserved population in Newark, New Jersey, from 15 March to 10 April 2020. Study staff abstracted demographic data (age, sex, race/ethnicity, admission source and insurance status), clinical data (body mass index, Charlson comorbidity index 11 and preadmission Palliative Performance Scale score) and details of the hospital course (intensive care unit admission, intubation, haemodialysis, discharge disposition and length of stay) from the electronic medical record. The preadmission Palliative Performance Scale was calculated using information available in the medical chart about the patient's performance status prior to admission and contracting COVID-19. Using this information, the score was calculated by a physician member of the study team. To further investigate palliative care processes and interventions, we reviewed the charts for do not resuscitate, do not intubate and comfort measures only orders. The Rutgers New Jersey Medical School Institutional Review Board approved this study.
This study was granted a waiver of consent and a waiver of Health Insurance Portability and Accountability Act authorization, since it is a retrospective study that involves no more than minimal risk to subjects (Reference number Pro2019000864; approved 6 April 2020). The primary outcome of this study was in-hospital mortality. Anderson et al. 12 developed the Palliative Performance Scale to help assess prognosis in cancer patients receiving palliative care, and it has since been applied to other seriously ill populations. 9, 13-15 The score is calculated from five domains: ambulation, activity and evidence of disease, self-care, intake and level of consciousness. Scores range from 0 to 100. Prior studies in seriously ill individuals have used the Palliative Performance Scale as a measure of frailty. In these studies, a score ⩽70 was predictive of in-hospital mortality and poor functional outcome at discharge; therefore, we dichotomized Palliative Performance Scale scores as low (⩽70) and high (>70). 12 Palliative Performance Scale scores are easy to determine from interviewing patients or families and can be estimated by reviewing the medical records; patients whose charts did not include sufficient information to calculate the Palliative Performance Scale were excluded.
• Incorporating the Palliative Performance Scale into the assessment of patients with COVID-19 can help predict outcomes.
• Improved understanding of mortality risk can help clinicians caring for patients with COVID-19 to discuss prognosis and provide appropriate palliative care, including appropriate recommendations about life-sustaining therapy.
To evaluate for selection bias in our enrolled patients, we first compared patients with and without Palliative Performance Scale scores (the latter excluded from the study). We found no statistically significant differences on most patient, clinical and outcome variables, except that the group without Palliative Performance Scale scores had a shorter length of stay. Among the study patients (those with Palliative Performance Scale scores), we first performed descriptive analyses, using counts and proportions for categorical variables, means and standard deviations for normally distributed continuous variables (age and body mass index), and medians and first and third quartiles for skewed continuous variables (length of hospital stay, length of intensive care unit stay and days on ventilator), for the entire cohort and by low and high Palliative Performance Scale groups. Second, the low and high Palliative Performance Scale groups were compared using the chi-square test or Fisher's exact test for categorical variables, and Student's t-test and the Mann-Whitney U test for non-skewed and skewed continuous variables, respectively. We fit a series of logistic regression models, beginning with an unadjusted model, to assess the association between the Palliative Performance Scale and in-hospital mortality. Adjusted odds ratios for in-hospital mortality were obtained from sequentially fit multivariable logistic regression models by adding the following covariates at each stage: age categories, gender and race/ethnicity; body mass index and Charlson comorbidity index; do not intubate orders; and dialysis and insurance. A p-value of 0.05 or less was determined a priori as the cut-off value used to infer statistically significant associations. All analyses were performed using SAS v9.4 (SAS Institute, Cary, NC).
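To make the modelling strategy concrete, the following is a minimal sketch of the sequential logistic regressions described above, written in Python (pandas/statsmodels) rather than the SAS v9.4 actually used; the file name and column names (pps, died, age_cat, sex, race, bmi, cci, dni, dialysis, insurance) are hypothetical placeholders, not the study's actual variables:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level file; one row per admission.
df = pd.read_csv("covid_cohort.csv")

# Dichotomise the preadmission Palliative Performance Scale at the published cut-off.
df["low_pps"] = (df["pps"] <= 70).astype(int)   # 1 = low (<=70), 0 = high (>70)

# Unadjusted model: in-hospital death versus low PPS.
m0 = smf.logit("died ~ low_pps", data=df).fit()

# Sequentially adjusted models, mirroring the covariate blocks described in the text.
m1 = smf.logit("died ~ low_pps + C(age_cat) + C(sex) + C(race)", data=df).fit()
m2 = smf.logit("died ~ low_pps + C(age_cat) + C(sex) + C(race) + bmi + cci",
               data=df).fit()
m3 = smf.logit("died ~ low_pps + C(age_cat) + C(sex) + C(race) + bmi + cci"
               " + C(dni) + C(dialysis) + C(insurance)", data=df).fit()

# Exponentiate coefficients to obtain odds ratios with 95% confidence intervals.
or_table = np.exp(m3.conf_int())
or_table["OR"] = np.exp(m3.params)
print(or_table)

The exponentiated coefficient on the low-score indicator in the final model corresponds to the kind of adjusted odds ratio reported in the results that follow.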
Of 443 patients admitted with COVID-19 during the study period, 374 were eligible for inclusion after excluding 61 patients for inability to calculate their Palliative Performance Scale and eight patients who remained in the hospital at the conclusion of the study. Thirty-six percent of patients had a low Palliative Performance Scale (134/374). The low Palliative Performance Scale group was older, predominantly black (78%) and had more comorbidities. High Palliative Performance Scale patients were admitted largely from home (>90%), whereas only 50% of patients with a low score were admitted from home, with most others being transferred from another healthcare facility (Table 1). Rates of intensive care unit admission and intubation were similar between the two groups (Table 2). A greater percentage of low Palliative Performance Scale patients had do not resuscitate and do not intubate orders placed during their hospitalization. The palliative care team was involved in the care of 28% (95% confidence interval 24%-33%) of all patients and 61% of patients who ultimately died. In-hospital deaths were more common in the low Palliative Performance Scale group (47% (95% confidence interval 39%-56%) versus 23% (95% confidence interval 18%-28%)). Most intubated patients (81%) died. Only 1 (3%) of 32 low Palliative Performance Scale patients survived intubation, compared with 29% of patients with a high score. Over half (59%) of low Palliative Performance Scale patients who were intubated were subsequently made comfort measures only, compared with 25% among high-score patients. Multivariable logistic regression analyses showed that with a low preadmission Palliative Performance Scale, the odds of dying in the hospital were 2.89 (95% confidence interval 1.42-5.85) times higher than with a high score (Table 3). This association persisted when adjusting for COVID-19-specific treatments. Among hospitalized patients with COVID-19, mortality was 31% overall and significantly higher (81%) in intubated patients. Frailty, assessed by a low preadmission Palliative Performance Scale, independently predicted mortality in hospitalized patients. Surprisingly, age and the Charlson comorbidity index did not independently predict mortality. The Palliative Performance Scale is a tool that can be easily administered at the bedside on presentation. With the known high rates of mortality, especially in the elderly 3 and those coming from nursing homes with COVID-19, 4 bedside providers frequently had conversations about end-of-life care, and patients elected to be do not intubate early in their hospitalization. Previously, we have used the Palliative Performance Scale to flag patients with high mortality risk for palliative care consultation. However, this study establishes that the Palliative Performance Scale can also be used to help intensive care unit clinicians prognosticate. During the COVID-19 pandemic, many hospitals in the region, including ours, were operating over capacity, with additional makeshift intensive care units set up to care for critically ill patients. Expanded palliative care services were available during the surge response in our hospital. However, most initial goals-of-care conversations, including conversations about withholding life-sustaining treatment, occurred with bedside providers. During this time, the palliative care team primarily provided ongoing support for families.
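For orientation (an illustrative reconstruction, not a figure reported by the study), the unadjusted odds ratio implied by the two in-hospital mortality proportions above is

\[
\mathrm{OR}_{\text{unadjusted}} = \frac{0.47/(1-0.47)}{0.23/(1-0.23)} \approx \frac{0.887}{0.299} \approx 2.97,
\]

which is close to the covariate-adjusted estimate of 2.89, suggesting that adjustment changed the association only modestly.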
Although most patients with a do not intubate order died, the do not intubate order was likely not the cause of their death, especially when considering the high rate of mortality in intubated patients. Of note, in other patient populations, early use of palliative care with seriously ill patients does not increase mortality. 16 Recognizing frailty as a predictor of mortality will help guide conversations and inform recommendations for appropriate palliative care processes, including life-sustaining therapy. This short report confirms that a low Palliative Performance Scale, a marker of frailty, independently predicts death in COVID-19. This finding is not surprising, as many studies have found poor outcomes in the elderly, 5 especially those with comorbidities, both of which contribute to frailty. Following these initial reports, published guidelines advocate the evaluation of frailty in caring for elderly patients, 8, 17 yet, to our knowledge, there are limited studies evaluating frailty as an independent predictor of outcomes in COVID-19. Although there are a multitude of frailty indexes, many are overly complex and multivariable. This information can be hard to obtain from patients when they are critically ill, or inaccurate when relying on families, especially during a pandemic. For example, the frailty index that was used by Bellelli et al. 18 to predict in-hospital mortality in hospitalized patients during COVID-19 includes 43 variables. Although their findings were similar to ours, we argue that the use of such a complex scale during a pandemic is time-consuming and impractical, especially with the no-visitor policy that has been set at many hospitals in the United States. The time it would take to complete this scale would delay assessment and reduce the availability of frailty as a tool for guiding conversations. The Palliative Performance Scale consists of only five domains and can easily be calculated at the bedside in minutes. With early access to the score, clinicians can use these findings to discuss prognosis and guide goals-of-care conversations. Limitations of this report include that it is a single-centre study at an institution that has been heavily affected by COVID-19. However, the degree of our surge heightened our ability to identify patients during a short time period and to observe the correlation between the preadmission Palliative Performance Scale and outcomes. In this retrospective study, the Palliative Performance Scale was abstracted from the chart and was dependent on accurate documentation. A more accurate assessment could be obtained from direct patient interview. Moreover, the retrospectively calculated scores could have been biased by knowing the outcome, in-hospital mortality, at the time of calculating them. It is, therefore, a major limitation that the scores were calculated retrospectively rather than at admission. However, we hypothesize that any bias introduced would be towards a higher score, based on data supporting patients' ability to meet each of the score thresholds (walking, activities of daily living, etc.), therefore strengthening our findings. Furthermore, in dichotomizing the variable, there would be less opportunity for error. In conclusion, patients admitted with COVID-19 and a low preadmission Palliative Performance Scale are nearly three times more likely to die in the hospital, and survivors are more likely to be discharged to a facility.
Incorporating the Palliative Performance Scale into the initial assessment of all patients with COVID-19 could help predict outcomes. Moreover, improved predictors of mortality will help clinicians caring for patients with COVID-19 to discuss prognosis and make informed decisions about life-sustaining therapy.
The pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), still causing severe illness and death across the world as of June 2020, has been described as the worst acute epidemic to hit humanity since the 1918 "Spanish" influenza [1]. The SARS-CoV-2 pandemic is estimated to have caused close to 430,000 deaths and to have infected up to eight million people so far [2], but both tallies are expected to continue to grow for at least some more months, due to the ongoing spread of the disease in several parts of the world. [...] and causes inflammation and cell death there. The virus then diffuses out into the body and can damage other vital organs, triggering a complex spectrum of [...] can most accurately be called "dried matrix spotting" (DMS) to reflect its wider usage and applicability. Collectively, DMS represent the large majority of microsampling procedures applied to bioanalysis, with dried blood spots (DBS) being the most frequent, well-known and widely understood kind [10]. One of the most important issues arising from the application of DMS is sampling volume variability. In fact, when spotting a biological fluid on an absorbing cellulosic support, both matrix viscosity and surface tension influence sample volume and spot area, and these characteristics in turn can produce unwanted and hardly controllable variability in analysis results. Moreover, spotting on paper also produces sample inhomogeneity due to chromatographic effects during fluid absorption [11]. These problems are always present in DBS, due to the nature of the spotting support, the presence of erythrocytes and haematocrit (HCT) variability [12]; however, other DMS kinds can also be affected by them. Due to volume variability, DMS usually need special precautions or procedures to produce reliable quantitative results. Over the years, several other microsampling techniques have been developed, proposed and implemented; some of them were direct answers to the DMS volume variability and sample inhomogeneity problems, while others were completely unrelated. Among the former, one can cite volumetric absorptive microsampling (VAMS), capillary sampling (such as hemaPEN) and microfluidic spotting (including HemaXis devices). Among the latter, solid phase microextraction (SPME) is available. VAMS makes use of a device including a plastic handle and a round, calibrated tip made from a proprietary hydrophilic polymer [13]. The tip is put into contact with the desired biofluid to absorb a constant sample volume into its pores: 10 µL, 20 µL or 30 µL, according to tip size. After drying, the sample absorbed on the tip is ready for storage or pretreatment and analysis (Figure 2). VAMS allows the microsampling of fixed fluid volumes from different matrices with high reproducibility, and without any significant HCT volume effect [14, 15]. The sampling procedure is quite straightforward and can be carried out at home by patients or other people with minimal training. Like most microsampling techniques, VAMS has been developed and validated for blood sampling, but it can be applied with satisfactory results to other matrices [16-19]. Suitably calibrated glass capillaries, cut to appropriate lengths, can draw fixed biofluid volumes from a fingerprick [20] or other body locations, irrespective of HCT. Then, the fluid can be spotted onto suitable supports for drying, or be stored as such until analysis.
A recently developed capillary device (hemaPEN) contains four end-to-end EDTA-coated capillaries that simultaneously collect four identical matrix spot replicates (2.74 μL each) from a single sample. The pen-like design of the device makes it easy to handle, includes a desiccant, ensures sample integrity and prevents most contamination (Figure 3). It is a single-use, tamper-resistant device, needing a specific opening tool for the retrieval of the DMS [21]. Different kinds of supports, combinations thereof, and microfluidic designs have been devised to produce dried spots with constant volume, lower inhomogeneity, or derivative matrices. For example, the Noviplex spotting cards use a two-layer design to obtain one 2.5-µL plasma spot, or two 3.8-µL plasma spots, from a single blood drop. After fingerpricking, the blood drop is deposited on the upper layer, which filters out erythrocytes. The resulting plasma produces calibrated spots on the lower layer [22, 23]. HemaXis DB devices can produce fixed-volume 10-µL blood spots on common spotting cards through the use of microchannels engraved in a plastic slab device [24] (Figure 4). HemaXis DX devices, which could directly produce dried plasma spots (DPS) or dried serum spots (DSS) through passive erythrocyte sedimentation in a proprietary microchannel arrangement, are currently under development [25]. SPME is often considered a miniaturised sample pretreatment technique, rather than a microsampling one. SPME is based on a porous, filamentous fibre (or a solid fibre coated with a porous substance), which is put into contact with, and absorbs, the sample and its components according to their affinity toward the fibre (or coating) material [26]. The analytes can then be selectively desorbed, directly in a gas chromatographic (GC) apparatus, or through solvent extraction in a liquid chromatographic (LC) apparatus. However, the SPME fibre can also be applied directly, in situ, to a fingerprick blood drop, to oral fluid, or to other biofluids: in this case, it can be considered a microsampling technique. SPME is thus uniquely positioned, in that it can carry out the simultaneous microsampling and pretreatment of biological fluids. On the other hand, the microfibre operation and handling is not simple, nor devoid of risks, and can be carried out only by specifically trained personnel: this automatically excludes any chance of self- and at-home microsampling by SPME. Moreover, SPME is a kinetic, equilibrium-based technique, so the sample volume (and the analyte amount) absorbed on the fibre is not easily assessed. Sample volumes in the microlitre range are particularly attractive for invasive sampling techniques (such as blood or cerebrospinal fluid drawing), where the minute volumes allow invasiveness to be kept to a minimum and multiple samples to be obtained within relatively short time spans, with a minimum of discomfort, damage and risk for the subject. Most forms of microsampling also involve sample drying before analysis, and this further extends the benefits of the technique. In fact, fluid sample drying effectively stops, or at least greatly slows down, most chemical and enzymatic degradation processes, thus often providing extended analyte stability in comparison to fluid samples. This in turn allows extensive sample storage with low space requirements (due to the small sample size) and without stringent temperature control requirements (due to the enhanced analyte stability).
These characteristics make dried microsampling increasingly suitable and attractive also for the collection and storage of non-invasive biological fluids, such as urine, sweat, oral fluid and, particularly suited to SARS-CoV-2 infections, epithelial lining fluid (ELF). Long-term sample storage is not among the most important requirements for biological specimens related to SARS-CoV-2, which on the contrary benefit from very fast turnover and high throughput, due to the need to obtain assay results as soon as possible in time-constrained conditions. However, it can be really useful for applications that could (and will) emerge after the pandemic in the strictest sense has ended, and in particular for research-related needs. For example, retrospective studies could be carried out on the microsamples collected during the acute phase of the disease; long-term disease effects or sequelae and their early markers could be studied in this way. As-yet unforeseeable uses for stored biosamples will no doubt be found in the future, and the use of dried microsamples could provide the needed information while at the same time freeing much-needed financial resources and valuable storage space for other applications. Moreover, as detailed above, most microsampling procedures involve the matrix interacting with some kind of support; for this reason, the simple act of microsampling can also be considered a (miniaturised) pretreatment of sorts. The resulting microsample is usually easier to purify to the desired degree and needs less complicated, less time-consuming procedures before it is ready for analysis. SPME in particular can be adapted to be a two-in-one microsampling and pretreatment procedure. The presence of a suitable support also generally aids the automation of analytical workflows [27]. DMS and most other card-based techniques can exploit existing machinery that effects the direct coupling of spotted cards to mass spectrometry (MS) detection, using direct paper spray-MS (PS-MS) interfaces [28]. Alternatively, automated flow-through extraction apparatuses can be used for coupling to chromatographic systems. The VAMS device handle has a similar shape and the same size as automatic pipette tips, so VAMS device racks can be directly inserted in, and handled by, common automatic liquid handling apparatuses [29, 30]. SPME fibres can be directly inserted into the mobile phase flow of both GC and LC systems. As a final, but no less important, consideration, microsampling is uniquely positioned to become the technique of choice for at-home and self-sampling protocols. Their simplicity, safety and independence from specially trained personnel make most microsampling devices and procedures suitable for obtaining reliable results even when carried out in less than ideal conditions. Moreover, the mild storage condition requirements allow one to safely send microsamples to their destination with minimal precautions and using general, non-dedicated transportation means. As already explained, the HCT effect is one of the most important and well-known drawbacks of DBS and similar whole blood-based microsampling techniques. Although it has been overcome to some extent with the introduction of alternative techniques (VAMS, microfluidic cards), the large dominance of DBS in the microsampling space means that this is still one of the most pressing problems.
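To a first approximation, the HCT effect can be treated as a volumetric bias: blood with higher HCT is more viscous and spreads less on the card, so a fixed-diameter punch contains more blood than the same punch taken from a more fluid sample. The following Python sketch shows a linear bias correction of the kind often fitted experimentally; the slope k and the reference haematocrit are hypothetical illustration values, not parameters of any published method, and both would have to be established for each individual assay:

def correct_dbs_concentration(c_measured: float,
                              hct: float,
                              hct_ref: float = 0.40,
                              k: float = 0.8) -> float:
    """Correct a DBS analyte concentration for HCT-dependent punch-volume
    bias, assuming the relative bias is linear in (hct - hct_ref)."""
    bias = 1.0 + k * (hct - hct_ref)   # relative over-/under-sampling of blood
    return c_measured / bias

# Example: 100 ng/mL measured at HCT 0.55 against calibrators prepared at HCT 0.40.
print(correct_dbs_concentration(100.0, hct=0.55))  # ~89 ng/mL after correction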
Even when the HCT effect proper is not an issue, all whole blood-based microsamples still have to deal with the fact that the sample includes erythrocytes, so that both HCT variability and analyte partitioning between erythrocytes and plasma can affect analysis repeatability. Microfluidic systems that automatically, passively separate plasma from whole blood overcome this problem, but generally at the cost of higher expenses and more complicated setups. Another pressing problem of microsampling is its lower sensitivity in comparison to classical sampling procedures. Several innovations in the miniaturised pretreatment and analysis space are progressively attenuating this problem, with ever more sensitive instrumentation and ever smaller sample requirements being introduced. For many applications (e.g., monitoring of major biofluid components), the decreased sensitivity can be scarcely relevant, but most cutting-edge science requires equally high sensitivity performance [31]. This drawback is not likely to go away soon, or ever. The low matrix amounts also mean that relative precision tends to be lower than for higher sample amounts. This disadvantage too is unlikely to disappear soon. Sample handling automation can go a long way toward alleviating it, reducing in part operator errors, but also introducing the need for suitable equipment qualification and validation. The SARS-CoV-2 pandemic has triggered a healthcare, economic and social crisis that will probably affect most of the world for years to come. Due to the disease novelty, it is currently impossible to predict the end of the pandemic, as well as the possible sequelae and long-term health effects on both symptomatic and asymptomatic patients after recovery. Due to the nature of the disease, a huge number of people still need, and will need in the future, diagnostic, therapeutic and follow-up tools, measures and facilities that involve some kind of bioanalytical workflow. Microsampling could facilitate the application of these workflows, hopefully making them more practical and feasible, faster and less expensive, while keeping the results equally reliable and useful. An overview of the methodologies potentially useful for SARS-CoV-2 therapy and diagnosis is reported in Table 1. Currently, most SARS-CoV-2 infection diagnoses are carried out either by nasopharyngeal and oropharyngeal swabs, which aim to detect viral RNA, or by serologic tests for the analysis of specific antibodies [32, 33]. Swab tests are mostly used when the possible infection is presumably in the initial phase; they rely on molecular biology techniques, which are carried out on a small amount of nasal or nasopharyngeal fluid, or of expectorate. In this case, a microsampling of sorts is already implemented; however, its application is currently semi-quantitative at best. In fact, the sample volume is relatively variable according to swab type, due to the different volume and absorbing power of different swab materials; moreover, even when using a single swab type, the absorbing material itself is usually not calibrated for the purpose of volume reproducibility. In this case, some form of DMS could be used after sample retrieval with a swab; or a VAMS procedure could be used directly, using, e.g., a modified, longer VAMS handle that can reach the sampling location.
However, in the case of VAMS, the very fast tip saturation would be a disadvantage, since the tip would tend to saturate with nasal fluid before reaching the pharynx. Higher-volume tips, different polymers, or different sampling procedures should be envisaged in this case. [...] Antibody quantification can also be carried out on urine, faeces and oral fluid. In all these cases, rapid qualitative tests will probably be highly prevalent. However, if the oral fluid antibody test is confirmed as reliable for diagnosing past SARS-CoV-2 infections, a microsampling approach could provide quantitative results that would also allow evaluating immunity or the lack thereof, without resorting to invasive procedures. In this case, at-home sampling could be easily implemented, but the test itself would reasonably be laboratory-based, at least for the time being, also in order to maximise method sensitivity and compensate for the minute sample volumes. Crucially, dried microsampling has been demonstrated to stabilise nucleic acids (both DNA and RNA) in a very effective way for years and even decades [34-36], with freezing temperatures (at least -20°C) greatly improving this parameter for DBS samples. Thus, all dried microsamples collected during the pandemic and afterwards could be stored in relatively small spaces and then be used to study both viral RNA and its host's nucleic acids when the need arises. Antibodies are also known to be stable in dried matrices (namely VAMS) at RT for months, and at -20°C for years [37]. Thus, a perspective on the possible storage and future use of antibodies in dried matrices can be adopted, similar to that on nucleic acids. [...] this possibility is quite attractive [42]. Cytokine assay in blood is associated with the usual possibility of microsampling application; until now, DBS has been the only microsampling technique applied, and mostly in new-borns or small children [43-45]. Interestingly, cytokine levels can also be assayed in sweat with relatively reliable results; in this case, microsampling by dermal patch has been applied [46]. Finally, interferon α2a has been analysed with in-tube immunoaffinity SPME of plasma [47]. Several haematic parameters related to coagulation can be useful, or even critical, to [...] Due to the large array of different pathological effects that SARS-CoV-2 can cause, several kinds of pharmacological therapies can be applied, and the choice of the specific drug and dose would be better tailored to each individual patient. In this regard, therapeutic drug monitoring (TDM) is one of the most effective practices allowing treatment personalisation and optimisation based on objective measurements [49-52]. TDM includes the repeated determination of drug and metabolite plasma levels, together with the use of chemical-clinical correlations (i.e., correlations between administered drug dose and plasma levels; between plasma levels and therapeutic efficacy; and between plasma levels and side and toxic effects) [53-55]. The information thus obtained represents a sound, rational and objective foundation, on which the clinician can base his or her activity, using clinical observations to build a safe and effective therapeutic platform [56]. TDM can also lead to reduced healthcare expenses, due to the possibility of better efficacy, increased patient compliance and enhanced safety, leading to a reduction in hospitalizations due to unwanted effects or therapy ineffectiveness [57].
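As a minimal illustration of the chemical-clinical correlation at the heart of TDM, consider the simplest dose-individualisation rule, valid only under the assumption of linear (dose-proportional) pharmacokinetics at steady state; the function and all numbers below are hypothetical teaching examples, not a recommendation for any specific drug:

def adjust_dose(current_dose_mg: float,
                measured_level: float,
                target_level: float) -> float:
    """Rescale a maintenance dose so the steady-state plasma level moves
    toward the target; valid only when concentration is proportional to dose."""
    return current_dose_mg * (target_level / measured_level)

# Example: 400 mg/day yields a measured trough of 2.0 mg/L,
# whereas the desired trough is 3.0 mg/L.
print(adjust_dose(400.0, measured_level=2.0, target_level=3.0))  # -> 600.0 mg/day

Real adjustments must of course also account for non-linear kinetics, sampling time, adherence and clinical context, which is precisely the interpretive work that TDM services provide.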
TDM is particularly useful in avoiding overdoses and their consequences, as well as in managing drug-drug interactions (DDI) [58]. Of course, this is even more important during polypharmacy (i.e., concurrent multiple-medication regimens). An effective antiviral therapy could solve most problems related to SARS-CoV-2 infections. Unfortunately, until now most antivirals seem to have produced improvements in patients' clinical conditions only when administered as part of complex therapeutic regimens including multiple drugs [59]. As one could expect, BMS for ELF microsampling has been used for the analysis of antivirals (peramivir) administered systemically [60]. However, generally speaking, apart from this single example, DBS and DPS have been the only microsampling variants to be applied [61-64]. Specifically, the most frequent antiviral therapies applied for SARS-CoV-2 infections involve lopinavir (often associated with ritonavir), remdesivir, favipiravir, ribavirin and arbidol [65]. To date, papers have been published on the determination and monitoring of lopinavir, ritonavir and ribavirin in DBS [66-68] and also in DPS [69, 70]. A peculiar example of DPS on glass filters instead of paper has been reported [71]; it is unclear what advantages and disadvantages this substrate brings to the assay. On the contrary, no microsampling technique seems to have been applied to remdesivir, favipiravir or arbidol determination. In the framework of SARS-CoV-2 treatments, this kind of therapy aims to avoid or reduce the impact of the cytokine storm, thus preventing the damage caused by an excessive immune response, using anti-inflammatory and immunosuppressant agents. Among the former, one can cite the non-steroidal anti-inflammatory drugs (NSAIDs) ibuprofen and ketoprofen and the corticosteroids methylprednisolone, dexamethasone and budesonide; among the latter, one can cite tocilizumab and tacrolimus [72, 73]. A variant of DMS has been developed, which involves the preparation of dried saliva spots (DSS) using fabric phase sorptive extraction (FPSE) [74]. This technique uses a calibrated-size patch of synthetic fabric, instead of cellulose-based paper, to absorb reproducible amounts of oral fluid. A similar workflow has also been applied to blood [75], but using calibrated biofluid samplers (BFS) made from a cellulose substrate coated with a porous sol-gel sorbent; due to the sampler architecture, the sample volume can be varied up to 1 mL, obtaining considerable advantages in terms of sensitivity (balanced, of course, by a corresponding loss in practicality and minimal invasiveness, and by greater storage space requirements). A few SPME applications to NSAIDs have also been published over the years [76-78]; however, most of them are proofs-of-concept for new kinds of absorbing materials [76, 77], and just a few were applied to real samples from patients [78]. Moreover, the latter deals with an SPME application as a sample pretreatment procedure on conventional fluid samples, not a combination of microsampling and miniaturised pretreatment, thus being limited to an 'in-lab' sample preparation technique. The performances of three different microsampling techniques (DBS, VAMS and DPS through Noviplex cards) have been compared for the analysis of endogenous corticosteroids, although the process has been applied to rat blood, not to human samples [79].
DPS had the advantage of producing results directly comparable to those of liquid plasma, while VAMS and DBS results, and the DBS sampling volume, are influenced by haematocrit (a simple numerical sketch of the corresponding haematocrit correction is given at the end of this section). On the other hand, the larger sampling volume of VAMS (10 µL, vs. 3.8 µL for DPS and 7.5 µL for DBS) confers higher sensitivity on this assay. In general, microsampling applications have been reported for endogenous corticosteroid determination (thus including cortisol and/or cortisone) in DBS [80-82] and in urine pretreated by SPME [83,84]. Regarding specifically the exogenous corticosteroids most often involved in SARS-CoV-2 therapy, budesonide has been analysed in DBS [85]; SPME procedures are also available: methylprednisolone has been directly microsampled by SPME in situ during liver surgery [86], and similar procedures could be envisioned for respiratory tract sampling; automated thin-film SPME of plasma has been applied to the analysis of dexamethasone, budesonide and prednisolone [87]. Dexamethasone is currently being evaluated as one of the most promising agents for the treatment of SARS-CoV-2 infection. A microsampling procedure for its determination in dried urine spots (DUS) and urine VAMS is available [88], which also covers other exogenous and endogenous glucocorticoids.

Many immunosuppressants have a rather narrow therapeutic window and are mostly used chronically for many years. As a consequence, their monitoring is quite widespread, and several microsampling procedures are available. As usual, DBS is the most frequent microsampling approach [89-93]. In one study, according to the authors, heated flow-through desorption made it possible to obtain HCT-independent recovery of the analytes [94]. In another study, fixed-volume DBS were obtained by means of a HemaXis device [95]. A comparison of blood VAMS and DBS for the monitoring of tocilizumab and six other therapeutic monoclonal antibodies showed that both techniques provided high analyte stability for at least 1 month at room temperature [96]. Blood VAMS has also been applied to tacrolimus quantification [97-99], including a comparison study between VAMS and DBS [100], which found better agreement with whole-blood tacrolimus levels for the latter than for the former. A peculiar microsampling application is the use of dried milk spots (DMKS) for the monitoring of tocilizumab in breastfeeding mothers [101], which introduces the possibility of also monitoring infant exposure during the mother's treatment.

Injectable anticoagulants (mainly low molecular weight heparin, LMWH, or unfractionated heparin, UFH) are suggested as possible therapeutic interventions in SARS-CoV-2 patients who are at risk of thromboembolic events (see above), or who already were before the infection [102]. Microsampling approaches to heparin analysis are few and far between: DBS from new-borns of heparin-treated mothers have been used to search for possible heparin presence, but no positive results have been obtained [103]. Oral anticoagulants are considered too prone to drug-drug interactions and too dependent on monitoring to be useful in this clinical setting; of course, the latter concern could be a good reason to study and propose new, straightforward and practical microsampling approaches that would make their TDM more feasible.
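As anticipated above, when the haematocrit is known, a whole-blood microsample concentration can be converted into a rough plasma-equivalent estimate. The sketch below assumes the analyte is essentially confined to plasma (negligible red-cell partitioning); the function name and values are illustrative, not a validated protocol:

    # Hedged sketch: plasma-equivalent concentration from a DBS measurement,
    # assuming the analyte partitions only into plasma (an idealisation).
    def plasma_conc_from_dbs(c_dbs: float, hct: float) -> float:
        """Convert a whole-blood (DBS) concentration to a plasma estimate."""
        if not 0 < hct < 1:
            raise ValueError("HCT must be a fraction between 0 and 1")
        return c_dbs / (1.0 - hct)

    # Example: 5.0 ng/mL measured in a DBS at HCT 0.45 corresponds to
    # roughly 9.1 ng/mL in plasma under these assumptions.
    print(round(plasma_conc_from_dbs(5.0, 0.45), 1))

For analytes that also partition into red cells, a blood-to-plasma ratio term must be added, which is why validated conversions are analyte-specific.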
Although mainly used against malaria, chloroquine and hydroxychloroquine have demonstrated efficacy against several viruses, including coronaviruses [104]. They are also used as anti-inflammatory/immunosuppressant agents in autoimmune diseases [105], so they could provide multiple beneficial biological activities to SARS-CoV-2 patients. DBS application to chloroquine analysis started in 1985, with the first paper on this topic published by Lindstrom et al. [106]; several other applications have been reported over the years [107-109] (as reviewed in Taneja et al., 2013 [110] and Casas et al., 2014 [111]), and until recently [112-114]. Regarding hydroxychloroquine, VAMS has been used for its determination in rheumatoid arthritis patients [115], while DBS has been used for a pharmacokinetic study in rats [116]. Lithium salts, usually administered to treat bipolar disorder, have also demonstrated some antiviral activity in preclinical studies [117]. Lithium requires constant TDM during therapy, so it is a prime candidate for microsampling applications; however, until now only DBS and DPS have been used for this purpose [118]. Cyclosporin A is a well-known immunosuppressant agent that also has activity against coronaviruses [119], so it could benefit SARS-CoV-2 patients through two different mechanisms. Until now, DBS [120] and VAMS [98] have been the only two microsampling techniques applied to cyclosporin A. Camostat is a protease inhibitor that could be useful in preventing SARS-CoV-2 entry into host cells, effectively blocking infection. Until now, no microsampling procedure for camostat analysis has been published.

The SARS-CoV-2 pandemic has taken the whole world by surprise, and scientists and clinicians alike are currently struggling to find new diagnostic and therapeutic tools that could help patients successfully cope with this multi-faceted and polymorphous disease. Within this landscape, microsampling could prove to be uniquely positioned to provide reliable quantitative information in short times and with high throughput, when coupled to both chemical and biochemical analytical tools. Moreover, dried microsampling could prove an invaluable asset for cheaply and practically preserving biological specimens for future use. Until now, simple DBS on common cards has been by far the microsampling technique of choice for most bioanalytical applications; however, its drawbacks have spurred the development of a wealth of modified or alternative procedures that are now reaching maturity, including DMS, VAMS and microfluidic and capillary matrix spotting. Taken together, all these microsampling techniques could prove to be even more useful, reliable and customisable than DBS itself. The corresponding applications potentially useful for SARS-CoV-2 therapy are summarised in Table 1. The future hopefully holds aetiological, curative SARS-CoV-2 therapies that can be personalised to each patient's peculiar needs and individual responses. Microsampling could be a decisive factor in accelerating the arrival of this future and in making it widely applicable, with reduced costs and increased effectiveness. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
SARS-CoV-2 is the etiologic agent of COVID-19, which in 2020 has caused over 180,000 deaths in the United States. 1 Concerns related to the risk of SARS-CoV-2 exposure within healthcare environments have caused patients to avoid seeking care. In Italy, for example, rates of hospital admission for acute coronary syndrome were one-quarter lower during the COVID-19 outbreak compared with rates earlier in 2020 or in the same period during 2019 2; in the United States, at the beginning of the pandemic, emergency room visits declined 42% compared with the same period in the prior year. 3 As the US moves into the next phase of the pandemic, patients may continue to exercise discretion with respect to seeking needed and elective medical care, due to concerns about transmission of the virus within healthcare settings. Such decisions have considerable implications for outpatient and inpatient medical and surgical care 4,5 and for the financial viability of health systems continuing to provide health care services. Thus, we sought to better understand the risk of healthcare-associated SARS-CoV-2 acquisition in the context of current infection control practices.

The University of Washington (UW) medical system initiated universal testing of all admitted surgical patients on March 30th, 2020, and of all admitted medical patients on April 13th, 2020, and serves as a regional referral laboratory for SARS-CoV-2 tests. These factors enable measurement of incident healthcare-associated SARS-CoV-2 acquisition among a cohort of patients known to be test-negative at the time of admission to a healthcare environment. UW Medicine is a three-hospital academic health system located in Seattle, Washington. Results of all SARS-CoV-2 tests performed on admission to UW Medicine hospitals from April 2nd, 2020 through May 14th, 2020 were analyzed. During this period, universal testing of patients entering these hospitals was required: a) within 72 hours prior to planned surgical procedures and b) beginning April 13th, at the time of all other inpatient admissions. All screening tests were collected using nasopharyngeal swabs and analyzed by reverse transcription polymerase chain reaction (RT-PCR). Repeat test results were evaluated through an observation period extending 14 days beyond discharge, to account for healthcare-associated infections which may have occurred just prior to discharge. (Patients readmitted after the post-discharge 14-day monitoring period only contributed exposure time during the initial admission.) Test results were extracted for analysis on May 28th to allow a full 14-day observation period for all included patients. Potential healthcare-associated COVID-19 infections detected using this approach were cross-checked against an institutional database of clinical reviews maintained by the UW Infection Prevention and Control program. The frequency of short-term (within 7 days) SARS-CoV-2 nasopharyngeal test discordance among initially test-negative patients in the UW system during a similar period has been estimated at 4.1%, similar to that reported by one other large academic medical system. 7 All patients with potential newly positive tests were subject to structured chart review, and final determinations (healthcare-associated vs. non-healthcare-associated) were made by consensus review. Initial screening test results from 3053 patients entering the health system during the study period were reviewed. Those with a documented prior positive result (n=33) were excluded.
Among the 2992 asymptomatic individuals with negative screening tests at the time of entrance into the health system, the average length of inpatient stay was 6.1 days (interquartile range 6), representing a range of 11,971 to 11,981 patient-days at risk within an inpatient environment, depending on the results of the consensus classification of healthcare-associated infection status. Of these 2,992 patients, 28.1% were retested one or more times during the observation period (12.4% during hospitalization, 11.9% within 14 days of discharge, and 3.9% both). Repeat testing in this initially negative group was most often performed for ongoing procedural or discharge surveillance (90%), but occasionally due to new onset of symptoms concerning for COVID-19 (10%). During the study period, 8 cases of possible incident SARS-CoV-2 positivity were observed among patients testing negative at the time of admission. After consensus review, 2 patients were classified as 'Definitely not HAI'; 3 patients were classified as 'Likely not HAI'; 2 patients were classified as 'Possibly HAI'; 1 patient was classified as 'Likely HAI'; and 0 patients were classified as 'Definitely HAI'. Accounting for these cases, there were between 1 and 6 potential cases in this study population during the study period, depending on which classifications are counted, indicating a range of 0.8 to 5.0 cases per 10,000 patient-days (the arithmetic is sketched at the end of this section). Of note, none of these cases were related, and no outbreak/cluster of COVID-19 was suspected among patients within UW Medicine hospitals during the period under investigation.

In this work, it was observed that the incidence of hospital-acquired SARS-CoV-2 infection within a single, large health system during a period of universal admission testing was relatively low. Other reports have found annualized hospital-associated respiratory viral infection rates to be approximately 4.9 (95% CI, 4.7-5.2) cases per 10,000 patient-days, 9 consistent with the upper range of our estimate for SARS-CoV-2. As health systems and public health authorities communicate the need to avoid foregoing necessary clinical care, transparent enumeration of the risks of SARS-CoV-2 transmission within healthcare settings will be essential. Such communication is important as patients delay or avoid seeking care for several time-sensitive indications, including childhood vaccination, 3 acute coronary syndrome 2 and stroke. 4

There are limitations to this study. These results represent the experience of 3 hospitals of one major academic medical system; as infection control practices vary widely, these results may not be generalizable to other health systems. Approximately 1 in 4 patients were retested following their negative admission RT-PCR result; among retested patients not undergoing mandated surveillance for administrative indications (i.e., prior to facility transfer or before a procedure), the chance of subsequent testing may have favored patients with concern for SARS-CoV-2, which could bias these results toward a higher healthcare-associated infection estimate. Case determinations relied on structured chart review (Table 1) and utilized contextual data on institutional rates of short-term nasopharyngeal test discordance 7 (i.e. testing negative initially and then testing positive shortly thereafter) to interpret such cases.
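For reference, the rate range quoted above follows directly from the observed case counts and patient-days at risk; a minimal sketch of the arithmetic (the function name is illustrative):

    # Cases per 10,000 patient-days, over the plausible bounds produced by
    # the consensus classification (1-6 cases, 11,971-11,981 patient-days).
    def rate_per_10k(cases: int, patient_days: float) -> float:
        return cases / patient_days * 10_000

    low = rate_per_10k(1, 11_981)   # ~0.8 cases per 10,000 patient-days
    high = rate_per_10k(6, 11_971)  # ~5.0 cases per 10,000 patient-days
    print(f"{low:.1f} to {high:.1f} cases per 10,000 patient-days")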
Finally, the period under investigation occurred after cases in King County, Washington had surpassed their peak and the overall census of UW Medicine inpatients with COVID-19 had begun to decline. It is possible that the risk of healthcare-associated COVID-19 infection was higher during this earlier period, when overall disease prevalence was increasing and before the standardization of current infection control procedures. Ongoing evaluation of hospital-acquired transmission rates is critical to ensuring patient and staff safety, earning patient trust, and identifying and addressing any risk factors for transmission as they emerge. As health systems and patients adapt to the ongoing US COVID-19 crisis, patients will continue to seek information regarding the risks of presenting for necessary medical and surgical care in this new environment. These data indicate that, in health systems with comparable infection control practices, the risk of healthcare-associated SARS-CoV-2 transmission may be relatively low.
In December 2019, many unexplained pneumonia cases occurred in Wuhan, China, and the disease rapidly spread to other parts of China, then to Europe, North America and Asia. This outbreak was confirmed to be caused by a novel coronavirus (2019 novel coronavirus, 2019-nCoV) [1]. 2019-nCoV was reported to cause symptoms resembling those of the severe acute respiratory syndrome coronavirus (SARS-CoV) outbreak in 2003 [2]. Both viruses share the same receptor, angiotensin-converting enzyme 2 (ACE2) [3]. Therefore, the virus was named SARS-CoV-2, and WHO recently named the disease it causes coronavirus disease 2019 (COVID-19). Up to February 21st, 2020, there were 75,569 confirmed cases of COVID-19 and 2,239 deaths in China [4]. Coronaviruses can cause multiple systemic infections or injuries in various animals [5]. However, some of them can adapt quickly and cross the species barrier, as in the cases of SARS-CoV and Middle East respiratory syndrome CoV (MERS-CoV), causing epidemics or pandemics. Infection in humans often leads to severe clinical symptoms and high mortality [6]. As for COVID-19, several studies have described clinical manifestations including respiratory symptoms, myalgia and fatigue. COVID-19 also has characteristic laboratory findings and lung CT abnormalities [7]. However, neurological manifestations in patients with COVID-19 have not previously been reported. Here, we report the characteristic neurological manifestations of SARS-CoV-2 infection in 78 of 214 patients with a laboratory-confirmed diagnosis of COVID-19, treated at our hospitals, which are located in Wuhan, the epicenter of the outbreak.

This was a retrospective study. Data were reviewed on all patients with COVID-19 treated from January 16 to February 19, 2020 at three designated COVID-19 care hospitals of Union Hospital of Huazhong University of Science and Technology. All patients with COVID-19 enrolled in this study were diagnosed according to the WHO interim guideline [8]. Only cases confirmed by a positive result on real-time reverse-transcriptase polymerase-chain-reaction (RT-PCR) assay of throat swab specimens were included in the analysis [9]. Union Hospital, located in the endemic area of COVID-19 in Wuhan, Hubei Province, is one of the major tertiary healthcare systems and teaching hospitals responsible for the treatment of SARS-CoV-2 infection as designated by the government. The study was performed in accordance with the principles of the Declaration of Helsinki and was approved by the Research Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China. Verbal consent was obtained from patients before enrollment. Demographic characteristics, medical history, symptoms, clinical signs, laboratory findings and chest computed tomography (CT) findings were extracted from electronic medical records. The data were reviewed by a trained team of physicians. Neurological symptoms were categorized into three main areas: central nervous system (CNS) symptoms or disease, peripheral nervous system (PNS) symptoms, and muscular symptoms. Acute cerebrovascular disease included ischemic stroke and cerebral hemorrhage diagnosed by head CT.
Muscle injury was defined as myalgia with an elevated serum creatine kinase level above 200 U/L [7]. All neurological symptoms were reviewed and confirmed by two trained neurologists. The date of disease onset was defined as the day when the symptom was first noticed. The severity of COVID-19 was defined by the international guidelines for community-acquired pneumonia [10]. Throat swab samples were collected and placed into a collection tube containing viral preservation solution [9]. SARS-CoV-2 was confirmed by real-time RT-PCR assay using a SARS-CoV-2 nucleic acid detection kit according to the manufacturer's protocol (Shanghai bio-germ Medical Technology Co Ltd). Continuous variables were described as means and standard deviations, or medians and interquartile range (IQR) values. Categorical variables were expressed as counts and percentages. Continuous variables were compared using the unpaired Wilcoxon rank-sum test. Proportions for categorical variables were compared using the χ2 test. All statistical analyses were performed using R (version 3.3.0). The significance threshold was set at P<0.05.
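The original analyses were run in R (version 3.3.0); the following is an illustrative Python analogue of the two tests described above, using made-up placeholder values rather than study data:

    # Illustrative analogue of the statistical comparisons described above;
    # the numbers are placeholders, not data from this study.
    from scipy.stats import mannwhitneyu, chi2_contingency

    # Unpaired Wilcoxon rank-sum (Mann-Whitney U) test for a continuous
    # variable, e.g. lymphocyte counts in severe vs non-severe patients.
    severe = [0.7, 0.9, 1.1, 0.8, 0.6]
    non_severe = [1.4, 1.6, 1.2, 1.5, 1.3]
    u_stat, p_cont = mannwhitneyu(severe, non_severe, alternative="two-sided")

    # Chi-square test for a categorical variable: rows = severe/non-severe,
    # columns = with/without a given neurological symptom.
    table = [[22, 66],
             [18, 108]]
    chi2, p_cat, dof, expected = chi2_contingency(table)

    print(f"Mann-Whitney p={p_cont:.3f}, chi-square p={p_cat:.3f}")  # vs 0.05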
A total of 214 hospitalized patients with confirmed SARS-CoV-2 infection were included in the present analysis. Their demographic and clinical characteristics are shown in Table 1. Table 2 shows the laboratory findings in the severe and non-severe subgroups. Severe patients had a more pronounced inflammatory response, including higher white blood cell and neutrophil counts, lower lymphocyte counts and higher C-reactive protein levels, compared with non-severe patients (detailed values are given in Table 2); their coagulation profile was indicative of a consumptive coagulation process. In addition, severe patients had multiple organ involvement, such as serious liver (increased lactate dehydrogenase, alanine aminotransferase and aspartate aminotransferase levels), kidney (increased blood urea nitrogen and creatinine levels) and muscle damage (increased creatine kinase levels). Table 3 shows the laboratory findings of patients with and without CNS symptoms. We found that patients with CNS symptoms had lower lymphocyte and platelet counts and higher blood urea nitrogen levels compared with those without CNS symptoms. Table 4 shows the laboratory findings of patients with and without PNS symptoms. There were no significant differences in laboratory findings between patients with and without PNS symptoms; similar results were found in the severe and non-severe subgroups, respectively. Table 5 shows the laboratory findings of patients with and without muscle injury. Compared with patients without muscle injury, patients with muscle injury had higher creatine kinase and lactate dehydrogenase levels. For the severe subgroup, patients with muscle injury had an increased inflammatory response (decreased lymphocyte counts and increased C-reactive protein levels), and more serious liver (increased lactate dehydrogenase, alanine aminotransferase and aspartate aminotransferase levels), kidney (increased creatinine levels) and muscle damage (increased creatine kinase levels). For the non-severe subgroup, patients with muscle injury only had higher C-reactive protein and creatine kinase levels compared with those without muscle injury.

This is the first report on the detailed neurologic manifestations of hospitalized patients with COVID-19. As of February 19, 2020, of the 214 patients included in this study, 88 (41.1%) were severe and 126 (58.9%) were non-severe. Of these, 78 (36.4%) had various neurologic manifestations involving the CNS, PNS and skeletal muscles. Compared with non-severe patients, severe patients were older, more often had hypertension, and less often presented with typical symptoms such as fever and cough. Severe patients were more likely to develop neurological symptoms, especially acute cerebrovascular disease, disturbance of consciousness and muscle injury. Therefore, for patients with COVID-19, close attention should be paid to neurologic manifestations, especially in those with severe infections, in whom these manifestations may have contributed to death. Moreover, during the COVID-19 epidemic, when seeing patients with these neurologic manifestations, doctors should consider SARS-CoV-2 infection as a differential diagnosis, so as to avoid delayed diagnosis or misdiagnosis and to prevent transmission. Recently, ACE2 was identified as the functional receptor for SARS-CoV-2 [3]; it is present in multiple human organs, including the nervous system and skeletal muscle [11]. The expression and distribution of ACE2 suggest that SARS-CoV-2 may cause some neurological symptoms through direct or indirect mechanisms. During the SARS epidemic, the virus was reportedly detected in the cerebrospinal fluid of some patients and also in their brain tissue on autopsy [12,13]. CNS symptoms were the main form of neurological injury in patients with COVID-19 in this study. The pathological mechanism may be CNS invasion by SARS-CoV-2, similar to the SARS and MERS viruses. Like other respiratory viruses, SARS-CoV-2 may enter the CNS through the hematogenous or retrograde neuronal route. The latter is supported by the fact that some patients in this study had hyposmia. We also found that lymphocyte counts were lower in patients with CNS symptoms than in those without. This phenomenon may be indicative of immunosuppression in COVID-19 patients with CNS symptoms, especially in the severe subgroup. Moreover, we found that severe patients had higher D-dimer levels than non-severe patients.
This may be the reason why severe patients are more likely to develop cerebrovascular disease. Consistent with previous studies [7], muscle symptoms were also common in our study. We speculate that this symptom was due to skeletal muscle injury, as confirmed by elevated creatine kinase levels. We found that patients with muscle symptoms had higher creatine kinase and lactate dehydrogenase levels than those without muscle symptoms. Furthermore, creatine kinase and lactate dehydrogenase levels in severe patients were much higher than those of non-severe patients. This injury could be related to ACE2 in skeletal muscle [14]. However, SARS-CoV, which uses the same receptor, was not detected in skeletal muscle by post-mortem examination [15]. Therefore, whether SARS-CoV-2 infects skeletal muscle cells by binding to ACE2 requires further study. Another possible explanation is an infection-mediated harmful immune response causing the nervous system abnormalities. In conclusion, SARS-CoV-2 may infect the nervous system and skeletal muscle as well as the respiratory tract. In those with severe infection, neurological involvement is more likely, including acute cerebrovascular diseases, disturbance of consciousness and skeletal muscle injury. Involvement of the nervous system carries a poor prognosis: the clinical condition of these patients may worsen, and they may die soon after. Therefore, for patients with COVID-19, physicians should pay close attention to any neurologic manifestations in addition to the symptoms of the respiratory system.

Table notes: data are presented as means ± standard deviations and n/N (%). Abbreviations: CNS, central nervous system symptoms; PNS, peripheral nervous system symptoms. P values indicate differences between the compared groups; P<0.05 was considered statistically significant.
According to epidemiological surveillance of the disastrous COVID-19 pandemic, the causative agent, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), harbours mutations associated with geography-specific etiological effects (Brufsky, 2020; Mercatelli and Giorgi, 2020). Currently, three major variants of SARS-CoV-2 have been identified, namely D614G in the spike protein, G251V in the non-structural protein 3 (NS3) and L84S in the ORF8 protein (Forster et al., 2020). In this article, we focus on the spike protein with the D614G substitution. The spike protein of SARS-CoV-2 is a 1273-aa transmembrane glycoprotein comprising three modules: a large ectodomain that protrudes from the surface, a single-pass transmembrane anchor and a short intracellular tail. The ectodomain has S1 and S2 regions responsible for host cell binding and viral-host membrane fusion, respectively. The S1/S2 cleavage site is located at the junction of the S1 and S2 regions, and the S2' cleavage site is located in the S2 region. Depending on the orientation of the receptor binding domain (RBD) in the S1 region, protomers in the functional spike protein trimer adopt an 'open' or 'closed' conformation (Cai et al., 2020; Walls et al., 2020; Wrapp et al., 2020). In the open conformation, the RBD exposes its ACE2-binding regions and interacts with the peptidase domain of the angiotensin-converting enzyme 2 (ACE2) receptor. This primary step clasps the virus onto the host surface (Lan et al., 2020; Yan et al., 2020). Studies on SARS-CoV have shown that subsequent proteolysis at the S1/S2 cleavage site sheds the S1 region from the spike protein, and cleavage at the S2' site near the fusion peptide causes a large conformational change in the S2 region. This conformational change leads to insertion of the fusion peptide into the host membrane and formation of a six-helix bundle. In this state, the spike protein bridges the viral envelope and the host membrane. A hairpin-like bend in the S2 region brings both membranes into close proximity for fusion, following which the genetic material is released into the cytoplasm of the human cell (Belouzard et al., 2012). It is also noted that, due to the multi-basic nature of the S1/S2 cleavage site, the SARS-CoV-2 spike protein can be pre-activated by the furin enzyme during viral packaging (Shang et al., 2020). In contrast to SARS-CoV infection, this process reduces SARS-CoV-2's dependence on target cell proteases for the succeeding infection. Therefore, mutations in the spike protein that influence the initial step of viral infection are associated with altered virus transmissibility and pathogenicity (Brufsky, 2020; Li et al., 2020). The ancestral spike protein with aspartate at the 614th position (S_D614) has been asynchronously superseded world-wide by the glycine-substituted variant (S_G614). The dominant S_G614 variant has been shown to have higher infectivity than the S_D614 variant (Korber et al., 2020). Concomitantly, other studies report that the glycine substitution disrupts a salt bridge between the aspartate at the 614th position and the lysine at the 854th position of an adjacent protomer, and may contribute to a higher frequency of the open conformation than in the S_D614 variant (Cai et al., 2020; Yurkovetskiy et al., 2020; Zhang et al., 2020a). A recent cryo-EM study further reveals that the glycine substitution prevents premature shedding of the S1 region (Zhang et al., 2020b).
Along similar lines, our calculation of local interaction energies and of the free energy difference between the aspartate and glycine variants of the spike protein, reported in this paper, suggests that glycine creates an energetically favourable local environment. As a result, it strengthens the association of the S1 and S2 regions of the same as well as adjacent protomer(s) and enhances the overall stability of the spike protein trimer. We generated an in silico model of the D614G variant of the spike protein trimer using the structure editing tool in UCSF Chimera with default parameters (Pettersen et al., 2004). Side chains were optimized using the SCWRL 4.0 program (Krivov et al., 2009). Two D614G variant models were generated, corresponding to the closed and partially open conformations of the spike protein trimer, based on the reference cryo-EM structures available in the Protein Data Bank (PDB) entries 6VXX and 6VYB, respectively. It must be noted that, although a D614G variant structure is available (PDB code: 6XS6), we did not consider it in this analysis due to the absence of the RBD domain in the solved structure. The effect of the D614G substitution on local interaction energies was examined using the Frustratometer algorithm (Parra et al., 2016). The underlying principle of the algorithm is that a native protein comprises several conflicting contacts leading to local frustration. To examine the frustrated contacts present in a protein, the algorithm systematically substitutes the residue type or alters the chemical configuration of each interacting pair (including water-mediated interactions) and generates approximately 1000 structural decoys for a given contact (elaborated in Ferreiro et al., 2007). The extent of the change in total interaction energy between the native structure and the structural decoys, according to the associative memory, water mediated, structure and energy model (AWSEM, implemented as a molecular dynamics algorithm, AWSEM-MD), decides whether the frustration of a given contact is minimal, neutral or high. When the native energy is at the lower end of the energy distribution of the structural decoys, the contact is stabilizing and minimally frustrated (favourable); a native energy falling in the middle of the decoy energy distribution indicates that the contact is neutrally frustrated; and a native energy at the higher end of the decoy energy distribution indicates that the contact is destabilizing and highly frustrated (unfavourable). Often, highly frustrated contacts signify functional constraints such as substrate binding, allosteric transitions, binding interfaces and conformational dynamics (Ferreiro et al., 2007, 2014). The frustration of a contact is represented as a frustration index, a Z-score of the interaction energy of the native contact with respect to the interaction energy distribution of the structural decoys generated for that specific contact. A frustration index below -1 indicates that the interacting pair is highly frustrated, while an index between -1 and 0.78, or above 0.78, indicates that the interacting pair is neutrally or minimally frustrated, respectively. Depending on the nature of the perturbation, frustration is referred to as 'mutational', 'configurational' or 'single-residue level'. In mutational frustration, the residue type is replaced by other residue types, while in configurational frustration, all possible interaction types between the native residue pair are sampled by altering the residue configuration. In the case of single-residue level frustration, only a single residue is considered.
The structural decoy set comprises randomized residue types at that specific site, and the frustration index is calculated by evaluating changes in the protein energy upon altering the residue type. In all three categories of frustration indices, only the concerned site/interaction is altered and the rest of the structure is maintained as native. In this study, we analyzed all categories of frustration indices for the two variants of the spike protein (S_D614 and S_G614) in the functional trimeric form. To study the effect of the D614G variation on the thermodynamic stability of the spike protein trimer, we calculated free energy changes upon aspartate-to-glycine substitution using the BuildModel function in FoldX (Schymkowitz et al., 2005). Five iterations of the free energy calculations were carried out to obtain converged results (Tokuriki et al., 2007). Inferences were derived from the closed and partially open conformations of the spike protein trimer.

As an amino acid substitution alters the local chemical environment, we probed the effect of D614G on the energetics of local inter-residue interactions. This can be quantified as the local frustration of a residue or of inter-residue interactions. We calculated the frustration index of residues and inter-residue interactions for the two variants of the spike protein, viz. aspartate or glycine at the 614th position. The results show that the frustration index of aspartate in the spike protein (S_D614) is -1.25, -1.25 and -1.30 for the three protomers in the closed conformation (red lines in Figure 1A). The frustration index of aspartate in the partially open conformation is -1.24, -1.31 and -1.28 for the three protomers (red lines in Figure 1B). Hence, in both conformations aspartate is highly frustrated. Conversely, in the glycine variant (S_G614), the residue is neutrally frustrated, with frustration indices of -0.48, -0.42 and -0.46 for the protomers in the closed conformation and -0.50, -0.35 and -0.37 for the protomers in the partially open conformation (blue lines in Figure 1). This result implies that residue frustration at the 614th position becomes neutral upon glycine substitution. In the spike protein of both conformations (S_D614), aspartate is involved in intra-protomer contacts (with residues Ser591, Gly593 and Gly594) as well as in inter-protomer contacts (with Asn616, Arg646, Ser735, Thr859 and Pro862) through direct, long-range electrostatic or water-mediated interactions. The mutational frustration index indicates that all 8 contacts are highly frustrated (Figure 2, top panel). However, in the closed conformation of the S_G614 variant, glycine interacts with Phe318, Leu611 and Cys649 of the same protomer and Pro862 of the adjacent protomer. Except for the inter-protomer contact through Pro862, all three intra-protomer contacts are minimally frustrated (Figure 2, top left panel). Likewise, in the partially open conformation of the S_G614 variant, glycine has the same contact pattern as in the closed conformation, along with one additional contact with Val860. Of these five contacts, three are minimally frustrated and two are highly frustrated (Figure 2, top right panel). Overall, both the number of contacts and the number of highly frustrated contacts are reduced upon aspartate-to-glycine substitution. Notably, glycine forms a greater number of minimally frustrated contacts, indicating that it creates a more favourable environment around the 614th position compared with aspartate.
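For concreteness, the classification rule quoted earlier (index below -1: highly frustrated; between -1 and 0.78: neutral; above 0.78: minimally frustrated) can be applied directly to the reported single-residue indices. A minimal sketch, separate from the Frustratometer pipeline itself:

    # Classify frustration indices using the thresholds stated above.
    def frustration_state(z: float) -> str:
        if z < -1.0:
            return "highly frustrated"
        if z > 0.78:
            return "minimally frustrated"
        return "neutrally frustrated"

    # Reported single-residue indices at position 614 (closed conformation):
    for label, z in [("S_D614, protomer A", -1.25),
                     ("S_G614, protomer A", -0.48)]:
        print(label, "->", frustration_state(z))
    # aspartate is classified as highly frustrated, glycine as neutral.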
Next, we calculated the configurational frustration index, which indicates how favourable the native contact between two residues is relative to the other possible contacts those two residues could form. The results show that, in the closed conformation, aspartate (S_D614) has one minimally frustrated contact, with Arg646 (Figure 2, bottom left panel), whereas glycine (S_G614) has six minimally frustrated contacts, with residues Ser591, Gly593, Asn616, Thr645 and Arg646 of the same protomer and Thr859 of the preceding protomer in the clockwise direction (Figure 2, bottom left panel). A similar trend is seen for the partially open conformation, in which aspartate (S_D614) has a highly frustrated contact with Gly593, while glycine (S_G614) has the same contact, but minimally frustrated, in addition to three minimally frustrated contacts with other residues (Thr645, Arg646 and Thr859) (Figure 2, bottom right panel). These observations are common to the three protomers present in the spike protein trimer (Supplementary Table S1). Hence, glycine has more favourable contacts than aspartate. Overall, the calculations of single-residue, mutational and configurational frustration reveal that the glycine substitution modifies the local interaction energies in a favourable direction.

If the reduction of frustration in the local interaction energies upon aspartate-to-glycine substitution is significant, it can influence the thermodynamic stability of the spike protein trimer. To examine this, we calculated the difference in the total free energy of the trimer between the S_D614 and S_G614 variants using the FoldX package (Schymkowitz et al., 2005). The results show that the free energy difference (ΔΔG) is -2.6 kcal/mol for the closed conformation and -2.0 kcal/mol for the partially open conformation (a back-of-the-envelope interpretation of these values is sketched at the end of this section). This suggests that the stabilizing effect of the glycine substitution in the local environment markedly increases the overall stability of the spike protein trimer. Together, these results imply that the enhanced stability of S_G614 may confer increased availability of the functional form of the spike protein trimer and, consequently, higher infectivity compared with S_D614, as observed in recent experimental studies (Korber et al., 2020; Zhang et al., 2020a, 2020b).

The increasing severity of the public health and economic crisis creates urgency to develop therapeutic interventions against COVID-19 at the earliest opportunity. The dominance of the D614G variant of the SARS-CoV-2 spike protein, which is being intensively studied across the globe for COVID-19 prophylaxis and treatment, invites special attention. In this study, we demonstrate using in silico approaches that the glycine substitution at the 614th position changes the local environment from energetically frustrated to favourable for contacts present within as well as between protomer(s). Consequently, the free energy of S_G614 is lower than that of S_D614, and hence local changes in the interaction energies at the 614th position in each protomer have a significant effect on the overall thermodynamic stability of the spike protein trimer. This finding adds to our knowledge of the mechanism underlying the increased transmissibility of S_G614.
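As a back-of-the-envelope aid to interpretation (not part of the authors' FoldX analysis), a stability difference of this size can be expressed as an approximate two-state Boltzmann population ratio at physiological temperature:

    # Hedged sketch: interpreting ddG as a two-state Boltzmann population
    # ratio; an illustrative estimate only, under a simple two-state model.
    import math

    R = 1.987e-3   # gas constant, kcal/(mol*K)
    T = 310.0      # approximate physiological temperature, K

    def population_ratio(ddg_kcal_per_mol: float, temp_k: float = T) -> float:
        """Stabilised-to-reference ratio (negative ddG = stabilising)."""
        return math.exp(-ddg_kcal_per_mol / (R * temp_k))

    print(population_ratio(-2.6))  # closed conformation, roughly 68-fold
    print(population_ratio(-2.0))  # partially open conformation, roughly 26-fold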
Supplementary Table S1 contains details of the frustration indices of the inter-residue contacts present at the 614th position of the spike protein trimer in the closed and partially open conformations. Table S1A in Sheet 1 provides the mutational frustration indices of the contacts present at the 614th position in the ancestral (D614) and dominant (G614) variants of the spike protein trimer. Table S1B in Sheet 2 provides the configurational frustration indices of those contacts in the ancestral (D614) and dominant (G614) variants. The frustration state (minimal, neutral or high) indicates that the contact is energetically favourable, neutral or unfavourable, respectively.
There is growing evidence that Black, Asian and other minority ethnic (BAME) people living in Europe are at increased risk of infection with SARS-CoV-2 and, if infected, are more likely to have severe disease. 1 In the United Kingdom, the Intensive Care National Audit and Research Centre first raised concerns that BAME people were over-represented amongst Covid-19 patients admitted to intensive care. 2 These findings were reported widely in the media and discussed in opinion pieces. 3-7 In Wales, the First Minister established an advisory group to examine the issue and provide recommendations to reduce ethnic inequality in Covid-19 outcomes. 8 Investigating ethnic health inequalities is hampered by poor recording of ethnicity in clinical data. This is the case for Covid-19 notifications and laboratory reports in Wales. In order to rapidly investigate ethnic variation in Covid-19 epidemiology, we applied Onomap, a name-based ethnicity classification tool developed by the Department of Geography at University College London, 9 to routinely collected, named Covid-19 laboratory test data held by the Public Health Wales Communicable Disease Surveillance Centre. We used individual person data on SARS-CoV-2 laboratory tests, hospital and intensive care admissions, and deaths in hospital.

Of 10,524 people testing positive for SARS-CoV-2 in Wales up to 3 May 2020, Onomap classified 9,833 in White ethnic groups and 580 in BAME groups. Rates of positive test results were similar for both groups: 336 per 100,000 of the White group tested positive, compared with 316 per 100,000 in the BAME group. Trends in positive tests should be interpreted with caution, as they most likely reflect testing policy as well as incidence. Of all those testing positive, a smaller proportion of the BAME group (18.1%) attended hospital compared with the White group (33.4%; see Table 2). However, the trend was reversed in people aged 50 to 59 years: 26.4% of positive BAME individuals aged 50-59 years attended hospital, compared with 19.0% of White individuals testing positive. The median age of hospitalised BAME individuals was 51 years, compared with 75 years for White individuals (p<0.01; Mann-Whitney two-sample test). Of those attending hospital, a much higher proportion of BAME individuals (20.0%) were admitted to intensive care compared with White individuals (7.7%). The proportions of hospitalised patients admitted to intensive care (ICU) were highest amongst the 'Asian and British Asian - Indian, Pakistani and Bangladeshi' (27.9%) and 'White - other' (25.3%) groups. The median age of BAME patients admitted to ICU was 51 years, compared with 58 years for White individuals (p=0.02; Mann-Whitney two-sample test). Amongst hospitalised patients aged 50-59 years, 25% of BAME patients were admitted to ICU compared with 19.5% of White patients. More patients died in hospital without being admitted to ICU. Of all those attending hospital, 9.5% of patients identified as BAME died, compared with 32.3% of White patients (Table 2). We successfully linked the records of all 3,394 people hospitalised with Covid-19, admitted to ICU, or who died in hospital, all as at 3 May 2020, using NHS numbers. Intensive care admission was more likely in hospitalised males (aOR: 1.92, 95% CI: 1.46-2.53) and in younger patients (Table 3, Figure 1).
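The adjusted odds ratios (aOR) reported here are of the kind produced by multivariable logistic regression; the sketch below (made-up data, not the study's analysis code) illustrates how such aORs and confidence intervals are typically obtained:

    # Illustrative multivariable logistic regression producing adjusted odds
    # ratios; the data frame is synthetic, not the study dataset.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "icu": rng.integers(0, 2, n),                    # outcome: ICU admission
        "male": rng.integers(0, 2, n),                   # covariate: gender
        "age_group": rng.choice(["<50", "50-69", "70+"], n),
        "bame": rng.integers(0, 2, n),                   # exposure of interest
    })

    model = smf.logit("icu ~ male + C(age_group) + bame", data=df).fit(disp=0)
    print(np.exp(model.params))      # adjusted odds ratios
    print(np.exp(model.conf_int()))  # 95% confidence intervals on the OR scale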
When specific ethnicities were examined, admission to ICU was more likely in certain minority ethnic groups (Figure 1). There was a strong association between increasing age and death from Covid-19, which remained after adjusting for gender and ethnicity (aOR for those aged 70 years and over: 11.77, 95% CI: 7.62-18.18). However, there was no evidence from this study that BAME groups were more likely to die from Covid-19 than White British or Irish groups, even after adjusting for gender and age (Table 3). To investigate further, we compared the distribution of previously reported risk factors for fatal outcome 11 in the White and BAME groups who had died. BAME people who died in Wales with Covid-19 were younger than White people who died (BAME median age 74, compared with 80 for White people; p=0.06, Mann-Whitney two-sample test). Underlying chronic disease was recorded for 48% of deaths. Of those who had a medical history recorded, nearly all had an underlying chronic condition that would put them at increased risk of serious Covid-19 symptoms, and there was no difference between the White and BAME groups.

This was a rapid initial analysis of existing surveillance data using name-based ethnicity classification software. It adds to the increasing evidence of variation in Covid-19 outcomes in ethnic minorities in Europe. The finding that certain minority ethnic groups are at higher risk of being admitted to intensive care but are no more likely to die than the White British and Irish group was also reported in the recent CO-CIN cohort study involving 23,577 Covid-19 patients attending hospitals in the UK. 14 Onomap has been used widely as a tool in public health, for example in studies investigating variation in influenza mortality, 15 hepatitis B infection 16 and HPV vaccination uptake. 17 However, Onomap has limitations, and all findings should be interpreted in light of these. We previously validated the tool using data containing self-reported or healthcare professional-reported ethnicity. There is an urgent need for all European countries carrying out Covid-19 surveillance to report trends by ethnicity, in order to inform local infection prevention and control policy and practice. Ethnic variation should also be considered in the design of interventions and in crisis communication. In Wales, an occupational risk assessment tool has been developed with the aim of reducing the risk of infection in those most vulnerable to severe infection. 19 This tool, developed initially for the health care sector, is for all ethnicities, but includes a weighting to account for the emerging evidence of increased risk in BAME individuals. Paul Longley is Director of Publicprofiler Ltd.
Zoonoses are infectious diseases, caused by bacteria, viruses, fungi, parasites or other pathogenic agents, that spread from animals to humans. Most human recurrent and emerging infectious diseases are zoonotic (Jones et al., 2008), and their origin can often be traced to specific wildlife reservoirs. Emerging zoonoses have an enormous impact on global human health and are a significant burden on national economies, especially in the developing world (Karesh et al., 2012). These impacts are particularly catastrophic when novel outbreaks spread worldwide through human-to-human transmission, such as the COVID-19 pandemic, and can lead to long-lasting consequences for the world's biodiversity and for conservation activities (Corlett et al., 2020; Evans et al., 2020). Communicating the health risks posed by zoonoses is paramount to protecting human populations and mitigating the spread of disease (Decker et al., 2012; Quinn et al., 2014). Yet, information (and misinformation) about zoonoses and their suspected animal hosts can potentially impact the public's perception of a given taxon (Davis et al., 2017). Ongoing news coverage, for example, repeatedly linking wildlife to a particular zoonotic disease can fuel animosity towards a given species (or set of species) and, in extreme cases, erode societal support for conservation or even fuel direct persecution of known or suspected disease reservoirs (Buttke et al., 2015; Guyton and Brook, 2016). In this context, even well-intentioned efforts by journalists, researchers and conservationists to counteract dangerous negative associations between wildlife and zoonoses can lead to unintended consequences and further reinforce negative stereotypes (Decker et al., 2012; Buttke et al., 2015; Lu et al., 2016). This insidious outcome is particularly problematic amid the current pandemic, due to the tremendous societal and economic impacts COVID-19 is having at a global scale, combined with an overabundance of media coverage associating wildlife, and in particular bats, with the disease. Moreover, much of the ongoing media framing has been poorly crafted and inadequately contextualized, which may have inadvertently amplified public risk perceptions about bat-associated diseases beyond the real proportionate risks. Such perceptions have also likely been amplified through social media, with potentially counterproductive effects on public support for bat conservation.

Insights from human psychology can be used to carefully design messages that result in better outcomes for public health and conservation (Davis et al., 2017; Lu et al., 2016). Although several previous authors have already discussed some of the pitfalls and challenges associated with message framing of wildlife-disease associations (e.g. Decker et al., 2011, 2012; Buttke et al., 2015), up-to-date guidance on how to communicate about zoonoses without dampening support for conservation is currently lacking. Using COVID-19 and bat conservation as a case in point, we build on previous research and outline how psychological science can be used to address some of the complexities associated with conservation communications in the context of emerging zoonoses. Since first being recorded in late 2019 in China, COVID-19 has spread to more than 200 countries and territories, causing over a quarter of a million human deaths and sending billions of people into lockdown as health professionals struggle to cope with rising numbers of infected patients.
At an early stage in the outbreak, bats were identified as a suspected reservoir of the new disease, owing to the similarity between SARS coronavirus 2 (SARS-CoV-2), the causative agent of COVID-19, and a bat-borne coronavirus (BatCoV RaTG13) previously identified in intermediate horseshoe bats (Rhinolophus affinis; Zhou et al., 2020). Although the World Health Organisation emphasizes that "possible animal sources of COVID-19 have not yet been confirmed" (World Health Organization, 2020), the association between bats and perhaps the worst zoonotic outbreak in modern history has predictably sparked negative reactions against this taxon (Zhao et al., 2020). Much of the perceived disease risk associated with bats likely relates to their association with several other high-profile emerging viral zoonoses, including the severe acute respiratory syndrome (SARS) coronavirus (CoV), the Ebola and Marburg filoviruses, and the Hendra and Nipah henipaviruses (Brook and Dobson, 2015; Brook et al., 2020). While bats thus present real risks as hosts of potentially dangerous diseases, several factors need to be considered to understand how this risk fits into the wider context of zoonoses. First, recent research indicates that the number of human-infecting viruses in bats is similar to that in other mammals, after controlling for the number of species within each order (Mollentze and Streicker, 2020). Second, ample evidence indicates that the greatest risks of virus spillover to humans come from human activities that facilitate the mixing of taxonomically diverse species (e.g., intensive animal farming, live wildlife markets, keeping of wildlife as pets, in sanctuaries, and alongside domestic animals; Johnson et al., 2015), as well as activities that involve, or increase, human-animal interactions (e.g., hunting and habitat destruction and deterioration). Third, bats make critical and multifaceted contributions to human well-being. In light of such factors, the dialogue about bat conservation and bat-associated infectious diseases poses a wicked problem (Waltner-Toews, 2017): namely, how to appropriately communicate the risks linking bats and zoonoses without vilifying the former. Message framing of bat-associated diseases is a mammoth challenge that requires close collaboration between virologists, public health officials, conservation scientists and practitioners. Without such collaborations, poorly contextualized or overblown associations between bats and zoonotic risk can swiftly mask the intrinsic (Blackmore et al., 2013; Quinn et al., 2014), ecological and economic (e.g. Boyles et al., 2011) importance of bats. In turn, this can propagate unwarranted negative attitudes and consequently lead to both direct persecution and erosion of local support for bat conservation efforts (López-Baucells et al., 2018). For example, large flying foxes are key pollinators of durian (Durio zibethinus), a culturally and economically important fruit crop throughout Southeast Asia (Aziz et al., 2017). However, some durian growers are now reluctant to support flying fox conservation due to fear of a backlash from the public in the aftermath of the COVID-19 outbreak (Tuttle, 2020). Worse still, reports from other parts of the world suggest a few communities have even sought to cull bats in a misplaced effort to combat the disease (CMS, 2020).
Such misguided behaviours present considerable cause for concern, not least because past experience has shown that such actions not only fail to eliminate disease risks (Blackwood et al., 2013) but can also increase the risk of zoonotic disease spreading to humans (Olival, 2015). As a case in point, a study of 20 colonies of common vampire bats (Desmodus rotundus) in Peru found that culling not only failed to eliminate rabies in disturbed colonies but inadvertently led to an increase in the proportion of infected bats compared with undisturbed ones (Streicker et al., 2012). Similarly, in Uganda, in response to an outbreak of Marburg hemorrhagic fever, locals culled thousands of Egyptian fruit bats (Rousettus aegyptiacus) but failed to prevent a second, even larger, outbreak some 20 km from the cave where the culling took place. Worse still, the culling also increased the risk of disease spillover, as the Egyptian fruit bats that subsequently recolonized the cave had higher levels of active infection (Amman et al., 2014).

Here, we draw on the latest findings from the psychology of science communication and behaviour change (Buttke et al., 2015; Decker et al., 2012; Macfarlane et al., 2020) to highlight some of the major pitfalls for bat conservationists and practitioners, especially when communicating with the public about bats and disease risk. We do not speculate about the origin of SARS-CoV-2 or further elaborate on the relationship between bats and zoonotic viruses (see, e.g., Wood et al., 2012; Brook and Dobson, 2015; Brierley et al., 2016; Brook et al., 2019; Anderson et al., 2020; Johnson et al., 2020). Instead, we aim to offer some guidance from the science of science communication (Kahan, 2015) to help ensure that conservation communications work to neutralize dangerous and unwarranted negative associations between bats and disease risk. Although framed around bats and COVID-19, we believe the advice presented here is also relevant to other taxonomic groups linked with zoonoses (e.g., bird conservation in the context of avian influenza), and to other situations where practitioners need to debunk harmful misinformation (e.g., false health claims about remedies made from body parts of endangered animals) or counteract unwarranted attitudes towards a given conservation issue (e.g., blaming wild carnivores for livestock attacks perpetrated by feral dogs). To improve the comprehension and accessibility of our guidelines, we also provide a simplified visual depiction (Fig. 1) of the more detailed guidance below.

Few would deny that the contemporary media landscape has become increasingly used to spread disinformation, that is, misinformation disseminated with the intent to deceive, often for political or financial motives. However, we believe that the growing tide of environmental disinformation (Cook et al., 2018) poses underappreciated threats to conservation objectives (Daly, 2020). Moreover, conservationists appear to be largely unprepared to contain disinformation when it emerges in relation to conservation issues (Kidd et al., 2019a; Thaler and Shiffman, 2015). In 2016, the Oxford Dictionaries' word of the year was "post-truth", defined as "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief" (Flood, 2016).
Such developments suggest that, whilst the scientific community is pushing an evidence-based agenda, modern society may have arrived at a new paradigm in which what matters is not veracity but holding attention and social signalling (McCarthy et al., 2020). This often translates into the spread of speculative, misleading, or re-interpreted information as factual (e.g., "bats may be a natural reservoir of SARS-CoV-2" becomes "bats are responsible for COVID-19"). The ease and speed with which such mistruths are shared through social media expedites the spread of disinformation and greatly magnifies its real-world repercussions.

When faced with falsehoods and inaccuracies, the scientific community often reacts by directly challenging the misrepresentations (Williamson, 2016; Caulfield, 2020). However, even after credible retractions of misinformation, people's reasoning often continues to be influenced by that misinformation, a phenomenon termed the continued influence of misinformation (Johnson and Seifert, 1994). Several cognitive factors are responsible for this effect, including that people often lack the skills to objectively evaluate information and so have a difficult time discerning between facts and familiar fictions (Bedford, 2010; Cook et al., 2018; Swire et al., 2017). Also, memory is an imperfect process: new information does not perfectly update old information, and recalled memories are vulnerable to outside influences that can result in memory distortion, making it difficult to remember which information was fact and which was fiction (Lewandowsky, 2012). Consequently, despite our intentions to correct misinformation, whenever we repeat it (even to refute it), our communications can strengthen the association. In other words, stating that "bats don't spread COVID-19" can strengthen the association between bats and COVID-19, and consequently many people will misperceive, misremember, or simply forget the detail about bats NOT being responsible for spreading the virus. To effectively debunk misinformation and overcome the continued influence effect, evidence-based refutations should:

(i) Warn recipients before confronting them with misinformation, because warnings enable people to avoid the initial acceptance of misinformation, thus reducing the need for subsequent revision (Ecker et al., 2010);

(ii) Study the tactics used by those spreading misinformation (Thaler and Shiffman, 2015) and, before misleading stories gain traction, pre-emptively explain the flawed argumentation techniques used (e.g., reliance on fake experts);

(iii) Repeat the facts, but avoid repeating the misinformation more than necessary (in order to refute it), because repetition can enhance familiarity, which ultimately can foster false beliefs (Jacoby and Kelley, 1989);

(iv) Use graphical evidence, because visual representations can make counterarguing more difficult and help consumers comprehend data (Dixon, 2015);

(v) Provide alternative explanations of the debunked phenomenon to fill the mental "gap" left behind by retracting the misinformation (Ecker et al., 2010). This explanation should also address the potential motivation behind the initial source of misinformation (e.g., to spread chaos, sell remedies, sell advertising, support an industry, or support a political ideology; Nyilasy, 2019).

Throughout human history, bats have been both feared and celebrated. For instance, in Mayan mythology, bats were associated with death through the bat god Camazotz.
In China, by contrast, they have traditionally been regarded as symbols of good fortune (Kingston, 2016). Recently, however, negative stereotypes associated with the group (often reflected in misinformed myths, legends, and folklore) have increasingly been reinforced.

Knowledge alone is rarely the sole driver of attitudes and behaviours towards environmental issues (Fielding and Head, 2012). Instead, people's judgements and decisions are often guided by heuristics, or mental shortcuts, that evolved to enable us to make quick decisions (Kahneman, 2012). One way we make quick decisions is by relying on "gut feelings" rather than more deliberative rational processes. This aspect of our decision-making is governed by affect: the specific quality of "goodness" or "badness" that becomes associated with an action or item (Slovic, 2007). Affective reasoning may explain much of the over-reaction towards bats (Kingston, 2016). Specifically, in the context of zoonoses, negative affect is being irrationally attached to bats because of the repeated, and thus increasingly familiar, link between disease (bad) and bats (now also feels bad).

Many attributes of affective reasoning should inform public messaging about the importance of bat conservation. One key attribute is that our evaluations of risks and benefits tend to be negatively correlated, even when the nature of the risks (or benefits) is distinctly and qualitatively different from the nature of the benefits (or risks; Alhakami and Slovic, 1994). For example, if bats are portrayed as high in risk, this will contribute to the perception that they are also low in benefit, and vice versa. This tendency is further amplified when people have less capacity (e.g., under high stress or time pressure) for analytical deliberation (Finucane et al., 2000). Evidence on the affect heuristic suggests that the perception of one attribute can be influenced by manipulating information about the other (Finucane et al., 2000; Ghanouni et al., 2017). Another key attribute of affective decision-making is that people tend to underestimate large risks that are mundane and under-reported (e.g., diabetes, stroke, tuberculosis) but greatly overestimate small risks that are over-reported, sensational, or fear-inducing (e.g., shark attacks, tornadoes, and cases of rabies transmission by bats; Slovic et al., 2006). One explanation for this bias is that such affect-laden risks, no matter how improbable, become encoded in people's memory through potent images, metaphors, and emotional narratives that trigger strong reactions, and thus also greater media interest, and therefore tend to feel riskier. Unfortunately, wildlife-associated diseases tend to have many traits that can amplify risk perceptions above the actual risk, including novelty, the potential for high-consequence outcomes (illness or death), and the lack of individual control over the threats (Buttke et al., 2015). To effectively alter people's irrational and/or harmful negative associations, whilst always being factual, communications should aim to:

(i) Avoid using negative, especially fear-inducing, metaphors or pictures linking wild bats to diseases, as such imagery will be far more memorable than any subsequent rational appeal to conservation outcomes.
However, if the messaging is specifically targeted at, and confined to, human-bat interactions such as hunting, trading, and eating wild bats, then it may be vital to clearly communicate the real health risks arising from such behaviours (Lu et al., 2016; Tannenbaum et al., 2015). Nevertheless, practitioners should still not use misleading imagery linking animals in the wild to zoonoses.

(iii) Provide factual, awe-inspiring natural history information about bats and about the benefits they provide to natural ecosystems; for example, emphasize their role in the recovery of degraded landscapes via seed dispersal and the suppression of herbivorous insects (Farneda et al., 2018). Acknowledging the ecological benefits of bats is especially important in risk messaging (e.g., when communicating about rabies), as it can foster greater intention to adopt recommended risk-reduction behaviours without stigmatising bats (Lu et al., 2016).

(iv) Explain why, if most bat species are left alone, they present little, if any, risk to human health. Where some risk exists for a certain species (Quinn et al., 2014), communicate the steps that people can take to reduce their personal risk (Decker et al., 2012), the steps society is taking to reduce the collective risk (Bandura, 2012), and how a given technology can help to reduce a particular risk (e.g., explain why research into bats' immune systems may hold the key to ground-breaking antiviral treatments for humans; Kachel, 2016).

(v) Quantify risks using easily evaluable comparisons to relatively mundane events (e.g., "although rabies is one of the most important zoonotic viruses in bats, at a global scale bites from domestic dogs are responsible for over 99% of rabies-related deaths"; World Health Organization, 2013). Equally, strive to describe both the high benefits and/or low risks using easily evaluable comparisons (e.g., "straw-coloured fruit bats (Eidolon helvum) benefit forests by dispersing seeds up to four times further than other similar-sized frugivores"; Abedi-Lartey et al., 2016).

Recognising the role of social context in people's attitudes and behaviours is paramount to understanding human-bat relationships. Kingston (2016) provides a detailed account of how social norms, the rules or expectations about how members of a community should behave, can impact bat conservation. Here, we emphasize the dynamic nature of social norms and how they can be affected by information regarding zoonoses, while providing best practice on how to use a norm appeal to alter damaging human behaviours towards bats. As individuals, we owe much of our success to other members of our communities. Such cooperation, and therefore our individual success, is often reliant on successfully detecting and adhering to social norms within our perceived community (Simler and Hanson, 2017). One way to alter harmful behaviours is to employ a norm appeal: messaging that aims to alter an undesirable behaviour by encouraging conformity towards a more desirable norm, usually by referring to the existing behaviour of an influential group (e.g., "most farmers in your community have installed artificial bat roosts to enhance pest-control services provided by bats"; Farrow et al., 2017). Failure to distinguish between different types of norms can result in messages that inadvertently strengthen undesirable norms.
For example, stating that "people should stop harming bats" also implicitly highlights the descriptive norm that some people are harming bats, which could encourage others towards that undesirable behaviour (Cialdini, 2003). In contrast, stating that "most people know bats are harmless and should be protected" may be equally true, but it encourages conformity in the desired direction. To effectively alter undesirable norms and encourage more desirable behaviours, communications should aim to:

(i) Avoid reciting adverse norms, such as "Stop harming bats!", as this implicitly suggests that some people are harming bats and can create the perception that this behaviour is more acceptable and widespread than it is in reality (Cialdini, 2006).

(ii) Emphasise descriptive norms, such as "The vast majority of countries protect bats, and millions of people live happily alongside them", as this will encourage conformity with the greater majority (Cialdini, 2003).

(iii) Where the desired behaviour is not yet established, highlight the increasing frequency of the desired norm, such as "more and more countries are formally recognising the importance of conserving wild bats" (Rare and the Behavioural Insights Team, 2019).

(iv) Highlight norms that are specific to the target populations, as the more a target community identifies with, respects, or aspires to the referent group, the greater the impact of the norm appeal (e.g., "People in your community are protecting bats, and benefiting from their role in nature"; White et al., 2009).

The COVID-19 pandemic, with its associated loss of life, severe human suffering and economic impacts, is set to profoundly reshape the perceived risks of wildlife-associated diseases. The fact that the most similar virus to SARS-CoV-2 identified to date is a bat-borne coronavirus has engulfed bats in a maelstrom of virus-related news coverage and a related, growing tide of misinformation. The reverberations will likely leave a long-lasting negative imprint on perceptions, attitudes, and behaviours towards bats. As the pandemic continues to unfold, bat researchers across the world are facing unprecedented pressure to engage directly with the public to contextualize the risks of bat-borne zoonoses and minimize potential backlash against the group. This task is likely pushing many researchers, especially ecologists and conservationists, into unfamiliar territory. While valuable lessons from different sub-fields of conservation science are available to help design conservation messages, such guidance has not yet been collated and placed in the context of zoonotic risk.

In this article, we outlined some key points that bat conservationists should consider when devising conservation messaging aimed at neutralizing unwarranted negative associations between bats and disease risk. Our advice focuses on three areas of psychological science that we perceive as particularly relevant in the current context. We stress that our advice is not exhaustive and should be considered within the growing body of literature devoted to zoonotic risk communication (Decker et al., 2010, 2012), conservation message framing (Kidd et al., 2019a; Kusmanoff et al., 2020) and conservation-focused social marketing, particularly in the context of human-wildlife conflict. In addition to the points highlighted here, communicators should also consider many other factors that influence public reactions to conservation messages.
These include, but are not limited to, hyper-saliency, social and cultural context, psychological distance, message framing, message channel, and messenger effects (Kidd et al., 2019b; Kusmanoff et al., 2020). Such factors should not be regarded in isolation, as their interaction can influence the receiver's attitudes and/or behaviours. Furthermore, zoonotic risk often represents a single dimension of human-bat conflict, and acknowledged or latent drivers of animosity towards bats (e.g., reactions to fruit damage by bats or to the noise and smells of roosting colonies) may also affect how conservation messages are received.
Mental disorders and substance use disorders are major contributors to the burden of disease in Australia 1 and worldwide, 2,3 with only a minority of those affected seeking or receiving evidence-based treatments. 4,5 Barriers to care include stigma, cost, and availability of services. 6 The COVID-19 pandemic has created additional challenges, as many traditional mental health providers stopped providing face-to-face services. As a result, interest is increasing in the digital delivery of psychological services. 7

Digital mental health services (DMHS) remotely deliver mental health information, assessments, and treatment via the internet, telephone, or other digital channels. DMHS are already part of routine care in several countries, operating either as stand-alone services or in conjunction with traditional face-to-face care. 8,9 For example, the Improving Access to Psychological Therapies (IAPT) service of the National Health Service (England) provides both face-to-face and digital services to patients with anxiety or depression, using a stepped-care approach that allows patients to move from low-intensity intervention (such as guided self-help) to high-intensity intervention (traditional face-to-face therapy). 9 Stepped care is not a common feature of stand-alone DMHS, in which patients often report being unwilling or unable to access traditional face-to-face therapy. 10

In this paper, we report outcomes from the Australian MindSpot Clinic, which, by volume of patients, is one of the world's largest publicly funded DMHS. The MindSpot project was launched in December, 2012, and is funded by the Australian Department of Health as part of the Australian Government's e-Mental Health Strategy. 11 MindSpot provides information about symptoms and local mental health services, brief psychological assessments, and therapist-guided treatments delivered via the internet and telephone to adults with symptoms of anxiety, depression, or chronic pain. We have previously reported results from 12 months 12 and 30 months 10 of operations, characteristics of service users during the COVID-19 pandemic, 13 and treatment outcomes for specific populations, including Aboriginal and Torres Strait Islander (Indigenous) people 14 and people born overseas. 15 In this paper, we aimed to provide a summary of demographic characteristics and treatment outcomes for patients registered with MindSpot over its first 7 years of operation, including service use and symptom severity, and examined trends in these characteristics over time.

This study was designed as an observational study and is reported according to STROBE guidelines. 16 We evaluated all patients who registered for assessment or treatment with the MindSpot Clinic between Jan 1, 2013, and Dec 31, 2019. Ethical approval for the collection and use of patient data was obtained from the Macquarie University Human Research Ethics Committee (Macquarie University, Sydney, NSW, Australia; approval number 5201200912), and the study was registered on the Australian and New Zealand Clinical Trials Registry, ACTRN12613000407796. MindSpot is funded by the Australian Government as a project and recruitment is ongoing as patients continue to access the service.
As MindSpot is funded by the Australian Department of Health, patients seeking assessment or treatment must complete an online registration questionnaire and meet the following eligibility criteria: Australian resident eligible for publicly funded health services (ie, Medicare-funded services); aged 18 years or older; and a self-reported principal complaint of anxiety, depression, or chronic pain. Patients are also provided with the terms of use, explaining that non-identifiable, aggregated data could be used for reporting and service evaluation purposes, and are required to consent to the terms of use, either online or by telephone, before proceeding with assessment and treatment.

This study describes the characteristics and treatment outcomes of a large sample of consecutive users (n=121 652) of the national Australian DMHS, MindSpot Clinic, from data collected over its first 7 years of operations. We provide information about the demographic characteristics, service preferences, symptoms, and treatment outcomes for people using this particular model of digital service. We found that clinic users represented a broad cross-section of the Australian population, and used MindSpot for a variety of reasons, with most seeking a confidential assessment rather than treatment. We also found that people who engaged in treatment achieved significant reductions in symptoms, which were sustained 3 months after treatment completion. Importantly, these findings confirm the role of DMHS in providing evidence-based assessment and treatment to large numbers of people, many of whom are not accessing other services. The present findings contribute to the evidence base for DMHS in reducing barriers to care, and confirm the utility of DMHS as an important component of contemporary mental health systems.

People register with MindSpot by creating an account and completing a screening assessment, online or by telephone. The screening assessment includes questions on demographic and service use information, and on symptoms and current stressors. Participants are also asked about suicidal thoughts and plans. Those who disclose suicidal plans or intent and who can subsequently be contacted by telephone are administered a structured risk assessment aligned with the New South Wales Government best practice guidelines, 17 and safety plans are developed for all such users to assist them to stay safe while seeking treatment or in the event of an increase in symptoms during treatment. 18 Those who cannot be contacted are referred to local police for a welfare check. People who continue to express suicidal intent are referred to local mental health services or emergency services, depending on the urgency of the situation. However, patients with suicidal thoughts can also continue to access MindSpot services if they agree to a safety plan. MindSpot operates under comprehensive internal and external oversight and reporting that includes clinical, organisational, and information technology governance frameworks. The clinical governance frameworks align with Australian national standards for mental health services and include policies, systems, and protocols for identifying patients or others at risk, their management, clinical escalation in the event of increased risk, and the training and supervision of staff. People who do not complete an assessment are sent information about managing symptoms and contact details for crisis services, and are invited to contact MindSpot.
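The registration and risk-escalation workflow just described is, in essence, a small decision procedure. The Python sketch below restates it for clarity; the class, field names, and return strings are our own illustrative assumptions and do not describe MindSpot's actual systems.

from dataclasses import dataclass

# Hypothetical registration record; field names are illustrative only.
@dataclass
class Registration:
    is_medicare_eligible: bool   # Australian resident eligible for publicly funded care
    age: int
    principal_complaint: str     # self-reported main problem
    consented_to_terms: bool

ELIGIBLE_COMPLAINTS = {"anxiety", "depression", "chronic pain"}

def is_eligible(r: Registration) -> bool:
    """Apply the three published eligibility criteria plus consent."""
    return (r.is_medicare_eligible
            and r.age >= 18
            and r.principal_complaint in ELIGIBLE_COMPLAINTS
            and r.consented_to_terms)

def triage_suicidality(discloses_plan_or_intent: bool,
                       contactable_by_phone: bool,
                       intent_persists_after_assessment: bool) -> str:
    """Sketch of the escalation pathway described in the text."""
    if not discloses_plan_or_intent:
        return "proceed: standard assessment"
    if not contactable_by_phone:
        return "refer: local police welfare check"
    if intent_persists_after_assessment:
        return "refer: local mental health or emergency services"
    # Safety plan agreed: the patient may continue with MindSpot services.
    return "proceed: structured risk assessment and safety plan"

# Example use with hypothetical values:
r = Registration(is_medicare_eligible=True, age=34,
                 principal_complaint="anxiety", consented_to_terms=True)
print(is_eligible(r))  # -> True
print(triage_suicidality(True, True, False))  # -> safety-plan pathway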
People who complete the assessment are invited to discuss their results with a therapist by telephone (appendix p 1), who provides tailored advice during an appointment of approximately 25 min. An assessment report that identifies clinically significant symptoms and includes information about how to access mental health services (including those offered by MindSpot) or other services is sent by the therapist to the patient and, if requested by the patient, to a nominated health professional, usually a general practitioner. Information is also provided about evidence-based techniques for self-managing symptoms. Participants who complete an assessment and elect for a MindSpot digital treatment course are then enrolled, unless they are considered ineligible for digital treatment by the therapist because their clinical presentation suggests the need for comprehensive or urgent face-to-face assessment; such patients are supported to access specialist services.

MindSpot delivers seven digital treatment courses, which were developed and validated in a series of randomised controlled trials at the Macquarie University online research clinic, the eCentreClinic (www.ecentreclinic.org). Four of the treatment courses are based on transdiagnostic principles, recognising that people often experience symptoms of anxiety and depressive disorders simultaneously and that similar psychological skills are used to treat these symptoms. The four transdiagnostic courses offered by the MindSpot Clinic are Mood Mechanic (for individuals aged 18-25 years), the Wellbeing Course (26-65 years), Wellbeing Plus (>65 years), and the Indigenous Wellbeing Course (for Aboriginal and Torres Strait Islander people). 14,19-22 These four interventions comprise evidence-based psychological treatment components, including psychoeducation about mediators and moderators of symptoms, cognitive therapy, behavioural activation, graded exposure, sleep training, communication and interpersonal skills, problem solving, and relapse prevention. 19,20 MindSpot also offers disorder-specific courses for obsessive-compulsive disorder, post-traumatic stress disorder, and chronic pain. Patients can choose a treatment course based on symptoms and demographic characteristics, and via telephone consultation with a MindSpot therapist.

All courses consist of five lessons delivered over 8 weeks. Each lesson comprises a series of slides that presents the principles of psychological treatment for the target symptoms via text and images, based on an instructional design that accommodates both didactic and case-based learning. 20 Course completion is defined as completion of four or more lessons. Courses are delivered online with regular support initiated by the therapist once a week, via telephone, secure email, or both; the therapist is also available at any time throughout the course. The approximate amount of therapist time per patient per course ranges from around 1·5 h to 3 h. 10 Therapist time includes all contact with patients, preparation time for each patient (including reading and responding to messages), and administration and supervision time during treatment and during follow-up.
Course materials are available online, although around 10% of people elect to receive materials via a printed workbook sent by postal mail. In addition to the therapist-delivered treatment courses, a 6 month trial of telephone-based counselling was conducted in 2018, and a self-guided version of the Wellbeing Course was introduced in 2019; those results will be reported elsewhere.

Standardised and validated symptom questionnaires are administered to patients at the screening assessment and throughout treatment. For the purposes of this study, outcomes on the Patient Health Questionnaire-9 (PHQ-9), the Generalized Anxiety Disorder 7-Item Scale (GAD-7), and the Kessler Psychological Distress 10-Item Plus Scale (K-10+) were analysed. The PHQ-9 consists of nine items measuring symptoms of major depressive disorder according to criteria of the Diagnostic and Statistical Manual of Mental Disorders, 5th edition. 23 Scores range from 0 to 27, with a score of 10 or more indicating a diagnosis of depression. The GAD-7 consists of seven items and is sensitive to the presence of generalised anxiety disorder, social phobia, and panic disorder. 24 Scores range from 0 to 21, with a score of 8 or more indicating the probable presence of an anxiety disorder. 25 The K-10+ was used as a secondary outcome measure to assess general psychological distress and disability. The first ten items comprise the Kessler Psychological Distress 10-Item Scale (K-10), with scores ranging from 10 to 50 and scores of 21 or more associated with the presence of anxiety and depressive disorders. 26 The K-10+ contains four additional questions used to assess the functional effect of the psychological distress. 27 In the current analysis, we used two of the additional questions to assess the number of full and part days a person had been out of role (unable to do usual duties and activities) in the past month. We also report the quantifiable K-10 score.

Patients are administered the PHQ-9 and GAD-7 at the screening assessment, once a week during treatment (days 1, 8, 15, 22, 29, 36, and 43), post-treatment (day 50), and at a 3 month follow-up (day 162). Patients complete the K-10+ at the screening assessment, the start of treatment (day 1), mid-treatment (day 22), post-treatment (day 50), and the 3 month follow-up (day 162). Patients also complete a satisfaction questionnaire post-treatment. The satisfaction questions we report on are: "Would you recommend this course to others?" and "Was it worth your time doing this course?" All questionnaires are delivered online and patients have 3 weeks to complete the post-treatment and follow-up questionnaires before they are considered closed.

We did descriptive analyses of demographics, service preferences, and baseline symptoms for the total sample and for each year. For categorical variables, χ² analyses of linear-by-linear associations were used to examine trends over time. ANOVA was used to examine the significance of changes in continuous variables over time. χ² values represent changes in categorical variables over time, and F-values from ANOVA represent significant differences in dependent variables, with year as the independent variable. Generalised estimating equation (GEE) models with Wald's χ² as the test for significance were used to examine changes in symptom measures from assessment to post-treatment and the 3 month follow-up. 28
Consistent with the principles of intention-to-treat analyses, we imputed missing data for all patients starting treatment, using separate GEE models that assumed data were missing at random, and adjusted for baseline symptoms and lesson completion. 29 An unstructured working correlation matrix and maximum likelihood estimation were used, and a gamma distribution with a log link response scale was specified to address positive skewness in dependent variable distributions. We calculated the clinical significance of change in PHQ-9, GAD-7, and K-10 measures using percentage change in symptoms from baseline 30 and within-group Cohen's d effect sizes, based on the estimated marginal means derived from GEE modelling at the screening assessment, post-treatment, and the 3 month follow-up. Reliable recovery was calculated as the proportion of patients whose scores were higher than the clinical cutoffs of the primary measures (PHQ-9 ≥10 or GAD-7 ≥8) at assessment and lower than the cutoffs post-treatment, with evidence of reliable change. Reliable change was defined as a change of at least 6 points on the PHQ-9 and at least 5 points on the GAD-7. 9,31 Reliable deterioration in patients who completed treatment was defined as a score increase of at least 6 points on the PHQ-9 and at least 5 points on the GAD-7 post-treatment. 11 Data were analysed with SPSS (version 26.0). A significance level of 0·05 was used for all tests, with the Bonferroni correction applied for multiple comparisons. There was no funding source for this study.

During the first 7 years of clinic operation, from Jan 1, 2013, to Dec 31, 2019, a total of 121 652 online screening assessments were started, of which 96 018 (78·9%) were completed (figure). The number of people starting an assessment at MindSpot increased consistently from 2013 to 2016, and subsequently plateaued at around 20 000 per annum as directed by funding contracts. A breakdown of completed assessments by year is available in the appendix (p 2). Demographic characteristics of the total sample and by year, representing all those who started the screening assessment, are shown in table 1. The mean age of patients in the total sample was 35·7 years (SD 13·8) and 88 702 (72·9%) were women. During the 7 years of clinic operation, small but significant changes were observed in the age, sex, Indigenous status, employment, education, and marital status of people initiating an assessment. For age, we observed a slight increase in the proportion of people aged 18-35 years over time, and less change in the proportion aged 55 years and older (appendix p 3). The proportion of women fluctuated between 71·6% and 74·3%, and the proportion of people married decreased from 42·9% in 2013 to 37·0% in 2019. The proportion of patients born in Australia remained around 78·0%, while the proportion identifying as Aboriginal or Torres Strait Islander increased from 2·1% in 2013 to 4·4% in 2019. We observed some change in employment status over time, particularly in the proportion of students (11·1% in 2013 to 16·0% in 2019), and a concurrent increase in the proportion of people with a university degree (38·3% to 40·8%). The proportion of patients living outside capital cities remained relatively stable (38·7% for the total sample of 2013-18). Proportions of patients from each state and territory are shown in the appendix (p 4). Almost a third of respondents were from New South Wales.
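Before turning to symptom outcomes, the reliable-recovery and reliable-deterioration rules defined in the statistical methods above can be made concrete. The following minimal Python sketch classifies a single patient's outcome on one primary measure using the published cutoffs (PHQ-9 ≥10, GAD-7 ≥8) and reliable-change thresholds (6 and 5 points, respectively); the function and category labels are our own illustrative assumptions, not the study's actual analysis code (which used SPSS).

# Clinical cutoffs and reliable-change thresholds as defined in the text.
CUTOFF = {"PHQ-9": 10, "GAD-7": 8}
RELIABLE_CHANGE = {"PHQ-9": 6, "GAD-7": 5}

def classify_outcome(measure: str, baseline: int, post: int) -> str:
    """Classify one patient's outcome on one primary measure.

    'Reliable recovery': above the clinical cutoff at assessment, below it
    post-treatment, with a decrease of at least the reliable-change threshold.
    'Reliable deterioration': an increase of at least the threshold.
    """
    threshold = RELIABLE_CHANGE[measure]
    cutoff = CUTOFF[measure]
    if post - baseline >= threshold:
        return "reliable deterioration"
    if baseline >= cutoff and post < cutoff and baseline - post >= threshold:
        return "reliable recovery"
    return "no reliable recovery or deterioration"

def percentage_change(baseline: float, post: float) -> float:
    """Percentage symptom reduction from baseline, as used in the paper."""
    return 100.0 * (baseline - post) / baseline

# Example: a PHQ-9 score falling from 16 at assessment to 6 post-treatment
# crosses the cutoff with a 10-point (62.5%) reduction -> reliable recovery.
print(classify_outcome("PHQ-9", 16, 6), percentage_change(16, 6))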
Reported psychological symptoms and stressors at the time of the screening assessment by year are shown in table 2. Over the 7 years of clinic operation, significant fluctuations were observed in symptoms. Mean scores at assessment (baseline) on the PHQ-9 decreased from 15·6 (SD 6·1) in 2013 to 14·5 (6·2) in 2019, with a concurrent decrease in the proportion of people self-reporting current difficulties with depression during that period. Mean baseline scores on the GAD-7 remained close to the mean for the whole period (12·5 [5·2]; with the exception of 12·9 in the first year), although the proportion reporting anxiety or worry increased over the 7 years. Mean baseline K-10 scores decreased slightly, from 32·2 (7·5) in 2013 to 31·3 (7·6) in 2019. Based on a series of questions specifically about suicidal thoughts, intentions, and plans, the proportion of people reporting thoughts relating to suicide fluctuated between 29·9% and 34·7%, while the proportion reporting both suicidal thoughts and current intent or a plan increased from 2·4% in 2013 to 3·5% in 2019. Significant changes over time were also observed in reported psychosocial stressors: the proportion of people reporting relationship difficulties increased, while the proportions reporting vocational, physical health, or financial difficulties decreased (table 2).

Service use and preferences by year are reported in table 3. Significant changes were observed in the main reported purpose of using MindSpot during the first 7 years of clinic operation. The proportion of people using MindSpot primarily for assessment and information increased from 52·6% in 2013 to 66·7% in 2019, while the proportion primarily seeking online treatment decreased from 42·6% in 2013 to 26·7% in 2019. Table 3 also reports the reasons participants gave for using an online service rather than a face-to-face service. Since the introduction of the question in 2015, around a third of respondents consistently reported convenience and (absence of) cost, and another third reported privacy and anonymity, as their main reason. Over the 7 years, 27·7% to 37·6% of patients reported that they had never previously seen a mental health professional, and 45·9% to 48·7% of patients reported speaking to a general practitioner about their mental health.

GEE analyses showed significant overall symptom reductions in PHQ-9 (Wald's χ²=29 432·8, p<0·0001), GAD-7 (Wald's χ²=27 731·1, p<0·0001), and K-10 (Wald's χ²=33 261·8, p<0·0001). Pairwise comparisons showed that scores on all measures decreased significantly from assessment to post-treatment and from assessment to follow-up (all p<0·0001). Analyses of the clinical significance of treatment outcomes by year revealed consistent results, with symptom reductions post-treatment for all years on all measures (appendix pp 5-7).
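For readers wishing to reproduce this style of longitudinal analysis, the GEE specification described in the methods (gamma distribution, log link, unstructured working correlation) can be expressed in Python with statsmodels, as sketched below on simulated data. The paper's own analyses were run in SPSS; the column names, simulation parameters, and model call here are purely illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulate a long-format dataset: one row per patient per time point
# (0 = assessment, 1 = post-treatment, 2 = 3-month follow-up).
rng = np.random.default_rng(2020)
n_patients = 500
times = [0, 1, 2]
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), len(times)),
    "time": np.tile(times, n_patients),
})
# PHQ-9-like positive, right-skewed scores that decline over treatment.
df["phq9"] = rng.gamma(shape=4.0, scale=np.exp(1.35 - 0.35 * df["time"]))

# GEE with gamma family, log link, and unstructured working correlation,
# mirroring the specification described in the methods.
model = sm.GEE.from_formula(
    "phq9 ~ C(time)",
    "patient",                      # grouping variable
    data=df,
    time="time",                    # required by the unstructured structure
    family=sm.families.Gamma(link=sm.families.links.Log()),
    cov_struct=sm.cov_struct.Unstructured(),
)
result = model.fit()
print(result.summary())

# Estimated marginal means on the original scale can then feed the
# within-group Cohen's d and percentage-change calculations reported above.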
This study described the demographic characteristics, service preferences, and symptoms of more than 120 000 users of a national DMHS, collected during 7 years of clinic operations. Users of the service represented a broad cross-section of the Australian population, many of whom were seeking a confidential assessment rather than treatment. Those who did engage in treatment achieved significant reductions in symptoms that were sustained for up to 3 months. The results confirm the efficacy and efficiency of MindSpot in providing evidence-based assessment and treatment to large numbers of people, many of whom are not accessing other services. Our findings contribute to the evidence in support of DMHS within contemporary mental health systems.

Consistent with reports before 2018, 11,12 MindSpot has continued to serve a broad and geographically dispersed cross-section of the Australian population. Some changes in demographics and symptoms have occurred over time, including an increase in the proportion of young adult users, an increase in the proportion identifying as Aboriginal or Torres Strait Islander, and an increase in people reporting anxiety. A key observation was the increase in the proportion of people reporting that their primary purpose for contacting MindSpot was to receive an assessment rather than treatment. Many patients reported to therapists that a confidential assessment was the only intervention required at the time of consultation. This finding suggests that a discussion with a therapist about the nature of symptoms and treatment options is valued by many people, and can serve as a brief therapeutic intervention in itself. The data also raise important questions about engagement and attrition in digital and traditional mental health services, and whether all patients accessing a service can be assumed to be treatment-seeking. 8 These results confirm our view that DMHS should align with patient-centred models of care and offer a range of services, including education, assessment, triage, support to access urgent help for people in crisis, and referral, as well as evidence-based treatment.

With regard to treatment outcomes, the overall magnitude of clinical improvement across the MindSpot treatment courses remains consistently high, with greater than 50% symptom reductions in anxiety and depression post-treatment, sustained for up to 3 months. Outcomes compare favourably with benchmarks relating to substantial clinical improvement, low rates of deterioration, and high patient satisfaction with DMHS in other countries, 32 including when offered in primary care, 33 and via other initiatives for large-scale implementation of psychological treatment in Australia 34 and the UK. 9

This study has several limitations. We report on characteristics and outcomes of patients registering for assessment or treatment with MindSpot, which restricted our sample to a small proportion of visitors to the MindSpot website (>500 000 per year) and limits the generalisability of our results.
We also acknowledge the issue of missing responses, which is a limitation of many studies, particularly those reporting outcomes obtained in routine care, in which patients are receiving a service rather than participating under controlled trial conditions. The absence of a control group also means that we are unable to account for natural remission or the effect of missing data. However, this limitation was mitigated by the weekly collection of symptom scores during treatment and by conservative statistical modelling, 30 and we found no indication of systematic bias in trends over time due to missing data. A post-hoc analysis did find some evidence that young patients and those with severe symptoms are not necessarily continuing or completing treatment (appendix p 8), which might affect the generalisability of our results. Generally, we found that several key demographic factors, such as the proportion of people born overseas, distribution by states and territories, Indigenous status (Aboriginal and Torres Strait Islander), and the proportion living in rural or remote regions, closely matched national statistics. 12 However, we acknowledge that other factors might be under-represented or over-represented in our sample. For example, the proportion of men contacting MindSpot was always less than 30%, an under-representation consistent with reports that men are less likely to seek help for anxiety and depression from traditional mental health services, despite having higher rates of suicide than women. 35,36 The question of how to engage men in both traditional mental health services and DMHS remains important and might require new service models.

Despite these limitations, our results show that a high-volume digital mental health service can be successfully implemented as part of routine care. The main strengths of this study are the analyses of comprehensive data on a large consecutive sample, combined with the regular measurement of symptoms to monitor treatment effects. Furthermore, treatment results over 7 years match those reported in earlier papers, confirming the robust nature of the digitised clinic procedures and clinical effects.

As of 2020, MindSpot has been operating for more than 7 years. In that time, the delivery of health care, including some forms of mental health care, via digital technology has become increasingly acceptable. Services such as MindSpot have shown that digital delivery of care increases accessibility and convenience for patients and can reduce other barriers to care, such as stigma. Other key learnings from MindSpot are that DMHS could have an important role in contemporary mental health care, not only by providing treatment, but also by providing information and assessment services to diverse groups of people that often under-use traditional health services, including Indigenous Australians and people living in rural and remote regions. 8,14 We maintain that DMHS are not a panacea and should not replace existing services, but can instead complement those services by reducing barriers and delivering evidence-based care to large numbers of patients in an efficient and cost-effective way. 8,37 People who do not respond to DMHS can then be supported to seek more intensive treatment, consistent with a stepped-care approach. 9 An important feature of DMHS is the potential for systematic measurement of progress and outcomes throughout treatment, which is rarely implemented in existing service models. 9
By providing services to large numbers of people and routinely collecting and reporting data about user characteristics and clinical outcomes, DMHS are not only providing valuable benchmarking data, but are also having a growing influence on the planning of mental health systems in an increasing number of countries. The routine collection and reporting of user data, with the exception of the UK's IAPT model, is not typical of publicly funded psychological services. Thus, such reporting by DMHS is not only increasing understanding among policy makers of the relative strengths and limitations of different service models, but is also likely to lead to increased expectations from funders and policy makers for similar reporting from traditional services. In the long term, this influence might lead to policy and funding decisions based more on evidence than on traditional practice; in the short term, it will require change in the culture and operations of services that do not routinely collect or report these kinds of data.

Developing, delivering, and evaluating DMHS is challenging, requiring complex procedures and ongoing evaluation in the context of ever-changing technology and a rapidly evolving governance and regulatory environment. Despite the challenges, we no longer need to question whether DMHS will become part of the framework of mental health services. The new questions are how this integration will occur and how best to integrate DMHS with existing face-to-face services. Based on the preferences of many patients for more easily accessible, confidential mental health care, we believe there will be an ongoing need for stand-alone services that provide the option of assessment and treatment and are not always linked with an existing provider. Ideally, existing mental health services should receive support to deliver both face-to-face and digital mental health care, and we strongly recommend engagement with patients and other stakeholders, including policy makers and funders, during the development and implementation of these services, to ensure that services are not only effective but also acceptable. Mental health professionals could then be trained and equipped to use digital tools with their own clients, to improve both the quality of care and the collection of treatment outcome data. Without such training and support, patients are unlikely to receive consistently high-quality care, and funders are unlikely to receive data on clinical outcomes to guide service or programme improvements.

The MindSpot project has become one of the world's leading providers of DMHS as part of routine care and has delivered mental health services with proven effectiveness to a large number of Australians in its first 7 years. The consistency of results supports the adoption of this model of care within the national mental health system, particularly in the present context of increased consumer acceptance of digital and telephone health-care services. Nonetheless, we maintain that the role of DMHS is to provide consumers and referrers with an additional choice of service model.
The pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the agent of coronavirus disease 2019 (COVID-19), has led to a higher prevalence of diminished physical activity. A recent study focusing on adults showed that daily step counts have decreased worldwide [1]. However, research on the pediatric population remains lacking. A study of children with congenital heart disease documented decreased Fitbit step counts during the COVID-19 pandemic [2]. Meanwhile, consumer spending on video games in the first quarter of 2020 increased compared with the same period the year before [3]. Hence, we suspect that adolescents are overall less mobile as a result of COVID-19 public health quarantine measures. Decreased mobility can be considered a risk factor for venous thromboembolism even in the pediatric population [4], which can result in ischemic stroke through a patent foramen ovale (PFO) [5]. Here, we report a pediatric stroke case likely due to the factors described above.

A 17-year-old boy without significant past medical history developed acute onset of blurry vision, difficulty speaking, right facial droop, and right upper and lower extremity weakness and numbness while playing video games. He was immediately taken to a hospital, where he was evaluated by a telestroke neurologist and found to have a National Institutes of Health Stroke Scale (NIHSS) score of 16. Computed tomography (CT) of the head showed no intracranial hemorrhage. He received intravenous tissue plasminogen activator. Subsequently, a CT angiogram of the head and neck was performed and showed a left P1 segment filling defect (Fig. 1), but he did not undergo thrombectomy. He was then transferred to the regional Comprehensive Stroke Center. Magnetic resonance (MR) imaging of the brain revealed diffusion restriction in the anterior medial left thalamus extending to the thalamocapsular junction.

On further interview, the parents reported that the patient had been playing video games excessively, sitting in the same spot for 10 h daily. This was not the case pre-pandemic, when he was more active with dancing. He had a body mass index of 18 kg/m² (weight 57 kg, height 1.75 m). The family reported no history of early vascular disease. Serum studies were collected to evaluate for hypercoagulability and arterial inflammatory diseases, including anticardiolipin antibody, beta 2-glycoprotein, lupus anticoagulant, fasting homocysteine, antithrombin III, protein S, protein C, activated protein C resistance, factor V Leiden, prothrombin 20210, factor VIII, d-dimer, antinuclear antibodies, anti-neutrophil cytoplasmic antibody, and erythrocyte sedimentation rate. All were normal. COVID-19 testing was also negative. Echocardiogram showed a hypermobile atrial septum with a 3.2-mm opening consistent with a PFO. Without Valsalva, there was a mild-to-moderate right-to-left shunt at baseline; with Valsalva, there was a moderate right-to-left shunt. Further imaging with lower extremity Doppler and MR of the pelvis did not reveal any deep vein thrombosis. Repeat vessel imaging with MR angiography of the brain and neck did not reveal stenosis, occlusion, or luminal irregularities; notably, the previously seen left P1 filling defect was no longer present. No arrhythmia was detected during his hospital course. The patient's imaging did not support an arterial (i.e., dissection, focal cerebral arteriopathy, atherosclerotic disease, vasculitis) etiology as a cause of the stroke.
The resolved left P1 filling defect on subsequent imaging pointed toward an embolic etiology of the stroke. Over the following 2 days, his NIHSS score continued to improve, and he had a score of 3 on discharge. He was prescribed daily aspirin 81 mg and instructed to follow up with outpatient pediatric cardiology for PFO closure.

Cryptogenic stroke comprises about 25% of all ischemic strokes in adults, with experts advocating revision of the stroke classification framework by adding PFO-associated stroke as a distinct entity [6]. Studies have shown that closure of a PFO in adults with cryptogenic ischemic stroke is associated with a lower rate of recurrent infarcts compared with medical therapy alone [7]. The most recent American Academy of Neurology practice guideline recommends that practitioners consider PFO closure for patients younger than 60 years with no other identified mechanism for their embolic-appearing ischemic infarct [8]. There are no controlled studies of PFO closure for children with embolic stroke; hence, the optimal treatment is unclear. Nevertheless, most case reports describe closure as the planned treatment [9].

In our case, we suspect the patient suffered an arterial ischemic infarct from a PFO, with venous thromboembolism as the source, precipitated by decreased mobility during the COVID-19 pandemic. The lack of deep vein thrombosis on our workup does not exclude venous thromboembolism; studies have shown that less than 50% of patients with proven pulmonary embolism have demonstrable deep vein thrombosis [10], suggesting that a source of embolism is often not found. Among teenagers, a sedentary lifestyle, especially when associated with video gaming, has been identified as a modifiable risk factor for venous thromboembolism and has been described in multiple case reports [11, 12]. Our case demonstrates that the pandemic has had far-reaching effects, possibly leading to an increased stroke risk among the young due to a sedentary lifestyle.
The global COVID-19 pandemic has sharply heightened society's awareness of science and of science's impact on individuals and whole societies (Boyd, 2019; Thorp, 2020). But even before the virus began its spread around the globe, many ecological, social, health and economic problems had been clamouring for attention, with society looking to science to develop solutions. At the same time, this greater awareness of the important role of scientific research has also led to more conflict-laden interactions between the public and the scientific community.

Scientists are generally well aware of their responsibilities, and are interested in and willing to consider the social, economic and political aspects of their research to find new solutions to global problems (Lackey, 2007; Boyd, 2019; Thorp, 2020). New forms of research organization and governance enable both scientific and socio-political debates; indeed, many argue that the natural sciences should engage more with the social sciences and the humanities to create a more humane and sustainable future. However, there is a growing lack of confidence in, or ignorance of, science among the general public (Blancke et al., 2015; Grams, 2019; Cardew, 2020), or even hostile attitudes against particular areas of science, notably agricultural research, vaccine development and, lately, virology. Conversely, some scientists may be dismissive of public misunderstandings or concerns (Blancke et al., 2015; Cardew, 2020). The increasing visibility of scientists in the mass media, not just in the wake of the COVID-19 pandemic, may also blur the boundaries between science and society and suggest that scientific debate is similar to a talk show. Such misunderstandings may undermine trust in science while, paradoxically, the general public appreciates science as a source of novel ideas and solutions. In particular, strongly held views, about homoeopathy, against GMOs or against vaccines, are often dismissive of scientific evidence and outlook while at the same time using science as a veil of legitimacy (Grams, 2019).

In general, even diverging views and anthropologies can be entwined with a great deal of scientific consideration and, vice versa, science can accommodate a great diversity of worldviews and anthropologies (Blancke et al., 2015; Sarewitz, 2015; Rovelli, 2016; McCoy, 2020). Yet some worldviews are utterly at odds with scientific considerations: anti-vaccine attitudes or many conspiracy theories about SARS-CoV-2 are not supported by any evidence. It is therefore unlikely that science per se is sufficient to foster universal agreement on the common good, as there are limitations to the normative power of science. Scientists should acknowledge that, whatever the certainty or rationality that can be ascribed to scientific knowledge, society is also based on philosophical and political factors and, importantly, on the freedom of choice, including erroneous choices (Sarewitz, 2015; Rovelli, 2016; McCoy, 2020). It is therefore important to analyse and discuss the complex relationship between science and worldviews. However, despite efforts to develop a "metascience", an auto-analysis of science by science may be considered a contradiction in terms. For a variety of reasons, natural scientists share, trust and use the scientific method as the means to generate knowledge and place great confidence in its rationality and objectivity. Hence, there is a need to look at science from other standpoints, using other methods and principles.
Moreover, all research is imbued with human thought, intelligence and creativity, along with prior knowledge, worldviews, values and preferences. Consequently, science cannot be "purely" objective, given that it is a brain process influenced by memories, views, values and so on. The complete denial of the influence of worldviews on science may itself reflect a bias linked to a uniform cultural context. Comparative analysis of texts and ideas, both synchronically (at a given point in time) and diachronically (along the path of time), which are hallmarks of philosophy and the humanities, may provide an independent analysis of the interplay between objectivity/subjectivity and interpretation/ideology in scientific research. In addition to analysing ideas and values, philosophy and linguistics provide critical and semiological analysis of languages and their relationships with meanings and concepts. Such analytical expertise is highly relevant for science, where novel words and expressions are constantly created to describe discoveries and insights. These words and expressions go through processes of maturation, evolution, drift or misunderstanding and are often used in different meanings or contexts. Current use of the word Anthropocene thus goes far beyond its definition as a geological era, with a fashionable tendency to describe as "Anthropocene" anything from "recent times" to any human activity. Additionally, mass communication and social media twist words and concepts from science so that they gain new meanings. Words like DNA, mutation, selection, ecosystem, invasive species or biodiversity have become pervasive in the media and everyday language. Words themselves may not be the immediate cause of misunderstandings, but major science and society debates are interwoven with the use of symbolic or emblematic words. The term "genetically modified organism" was conceived as a straightforward and unambiguous description, but it has given rise to endless discussions induced by unexpected ambiguities in novel biotechnological contexts such as cisgenesis, intragenesis or genome editing. Furthermore, ranking and performance measures put pressure on scientists, laboratories, academic institutions and journals alike to self-promote and advertise their results and services. Advertisement and promotion increase the risk that words are misused or wrongly used so as to have a greater impact on the media and the public, such as the "blueprint of life" to describe the full sequence of the human genome. Mass communication and social media can further amplify the use of such words or terms and, vice versa, prod scientists to adopt catchy terms and trendy vocabulary to draw attention. Scientists should be aware that their words and terms are readily transferred and amplified throughout the general public, and that this is often at the expense of rigorous description. In the long term, this linguistic bubble of exaggeration and hubris may cause more misunderstanding, more miscommunication or even conflict between scientists and the general public. The plasticity and dynamics of language that are necessary to find new words and metaphors for new discoveries would probably be hampered by strict linguistic rules. Nonetheless, scientific vocabulary and word use may also benefit from an ongoing process of self-reflection and self-improvement through peer-reviewed publications and discussions.
Scientific journals could therefore give more space to philosophy and the humanities to publish articles in their own right that analyse the worldviews, anthropologies, language and vocabulary that underlie the scientific articles they publish. Their analytical and contextual expertise could help to refine the meaning of conceptual, general or anthropomorphic words and highlight underlying worldviews and anthropologies. The interfaces between science and society could then be analysed and discussed from the viewpoints of science and of philosophy and the humanities, hopefully fostering thought-provoking questions and mutual respect between different fields of reason and rationality. It may be argued that philosophical meta-analysis of the scientific production of knowledge is too abstract or too fuzzy. It has even been proclaimed that only science can produce absolute knowledge and that philosophy is outdated (Rovelli, 2016). However, as the Italian physicist and writer Carlo Rovelli emphasized (Rovelli, 2016), the arguments of Aristotle in support of philosophy are not outdated: philosophy is at the heart of intellectual activities and helps to clarify perplexities and ambiguities; conversely, ignoring philosophical issues can lead to unexpected and irrational biases. Transparency under the light of philosophy and the humanities should contribute to upholding and enriching the rigour of science and to restoring public trust in the scientific endeavour.
The global coronavirus disease 2019 (COVID-19) pandemic is leading to an overwhelming number of patients with acute critical illness who need basic and advanced life support in the ICU. In preparation for the anticipated surge of patients with COVID-19, critical care leaders have grappled with, and now directly confront, challenging questions about which services should be prioritized, which should be reduced, and which should be halted to increase critical care capacity and maximize safety for all. Although clinical research in the ICU is always important, it is a global priority during the COVID-19 pandemic (1, 2). The ability to appropriately prioritize pandemic-specific research requires quickly constituted or established research teams, a responsive funding system, rapid ethics and contract review, and the commitment of research and bedside staff. Observational studies and randomized trials are imperative to advance our knowledge of pathophysiology, immunology, diagnosis, prognosis, prevention, treatment, triage, and palliation. While hundreds of protocols are being newly developed to understand or mitigate COVID-19, others are ready-made, such as the severe acute respiratory infection registry (e.g., Short Period Incidence Study of Severe Acute Respiratory Illness [SPRINT-SARI]) (3), or in place and readily adapted, such as the community-acquired pneumonia management trial, augmented now with a pandemic treatment domain (e.g., Randomized Embedded Multifactorial Adaptive Platform Trial for Community Acquired Pneumonia [REMAP-CAP]) (4). During this pandemic, most institutions have released instructions to focus on pandemic-specific research. Some organizations have required the cessation of research not specifically related to COVID-19, in anticipation of the increase in clinical workload required to care for patients with life-threatening infection during the pandemic, the need to institute physical distancing for employees, and consideration of limited personal protective equipment (PPE). The objectives of this article are to: 1) describe the importance of critical care clinical research that is not pandemic-focused during pandemic times; 2) outline principles to assist in the prioritization of nonpandemic research during pandemic times; and 3) propose a framework for guiding decisions about whether, when, and how to continue nonpandemic research, while still honoring the moral and scientific imperative to launch research that is pandemic-focused. The perspective of this article is single-site multistudy management. Although intended for those operationalizing research protocols in a single site, many of the principles and considerations can be adapted to single-site methods centers conducting multicenter studies. Using in-person, email, and videoconference exchanges, we convened an interprofessional clinical research group representing medicine, nursing, respiratory therapy, physiotherapy, epidemiology, and ethics. A literature review included empirical studies, ethics documents, and expert commentaries from 2010 to the present, augmented by traditional media and social media posts in March 2020 and April 2020. By telephone and email, we then consulted research institute leaders, senior university scholars, hospital administrators, ethics board chairs, investigators, research staff, clinical directors, and consultants in critical care and infectious diseases in our own hospital, as well as investigators in two other healthcare organizations.
This process, and lessons learned from ICU research during the severe acute respiratory syndrome and H1N1 pandemics (5-8), informed our approach to balancing the interests of the public regarding the scope of research during a global health crisis. Clinical research during a pandemic should ideally maximize the benefit to individuals while also maximizing the benefit to society (9). A pandemic situation may require us to adopt a public health ethics approach, prioritizing community and population health over individuals (10). Applied to the question of what research to continue, this approach reminds us of the larger good that research can do to improve the health of critically ill patients with and without COVID-19. That is, while clinical research should be prioritized to advantage patients with COVID-19 in order to urgently care for affected patients, it would ideally be done in a way that does not unduly disadvantage critically ill patients without COVID-19. Thus, timely, rigorous, relevant, and ethical clinical research is needed to improve the care and optimize the outcomes of patients both with and without COVID-19 (5, 6, 9, 11-15). Such an approach also acknowledges that many previous and ongoing critical care studies that are not exclusively focused on COVID-19 remain relevant to patients with COVID-19 (16). We propose the concurrent conduct of research that is pandemic-focused and research that is not pandemic-focused, whenever safe, feasible, and locally approved. Suspension of some studies may be needed, with mechanisms to consider reinstatement at the earliest appropriate time. Continuation may be possible for other studies when certain conditions are met. A transparent process outlining key considerations and objective criteria can help to achieve fairness in decision-making when allocating resources in crisis situations (17), including research resources. The considerations determining these decisions should also influence approaches to starting new clinical research that is not pandemic-focused, not only while the pandemic unfolds but also as it dissipates. Consider the Status of the Pandemic. COVID-19 has consumed and completely overtaken all available critical care resources, and in some situations overwhelmed entire healthcare systems, rendering any research extremely challenging if not impossible (18, 19). The pandemic burden in each local context will dictate whether and what research is appropriate and realistic. Research should not be conducted if it will divert necessary clinical knowledge and skills, or require space, PPE, and other key resources that are needed for an optimal clinical response to the outbreak (20). Consider Jurisdictional Guidance. Jurisdictional guidance regarding research during the COVID-19 pandemic has been variable, as international monthly self-reported surveys indicate (21). Responses have ranged from institutional silence, to suggestions for investigator discretion on suitable studies to conduct, to mandates and associated funds to focus exclusively on pandemic-specific research, paired with directives to suspend all nonpandemic research. Just as institutional sanctions influence academic operations during inter-pandemic periods, local jurisdictional guidance is the starting point for local deliberations about which research to conduct during the pandemic. Consider the Capacity of Research Personnel.
The capacity of research personnel is a key determinant of the conduct of both nonpandemic and pandemic-specific research. Clinically trained research staff with up-to-date professional credentials (e.g., nurses, respiratory therapists, physiotherapists, and physicians) may need to be deployed to the frontline to care for patients as the pandemic progresses. Research staff may also be affected by illness, precluding any research whatsoever. On the other hand, research opportunities for staff working on paused research, or in other areas closed during the pandemic (e.g., outpatient clinics, elective surgery), could fortify existing critical care research personnel. Specialized personnel are often required for both pandemic-focused and nonpandemic-focused research. For example, if research pharmacy staff are reassigned to clinical pharmacy activities, pharmaceutical studies may become difficult to pursue. Studies requiring the procurement and processing of biological specimens may be impossible if protective measures are too resource intensive, or if laboratory research staff are overwhelmed with the demands of COVID-19 testing to meet the hospital's basic clinical needs. Consider the Safety of Research Personnel. For any clinical research, be it pandemic-focused or not, strategies are needed to minimize or replace typical face-to-face research interactions (e.g., for informed consent, questionnaires), replacing these with other methods (e.g., telephone consent, videoconferencing). Provision for off-site work by clinical research staff may require new safeguards to ensure the confidentiality of identifiable data on personal computers or home networks. Timely administrative approval to access hospital servers may be needed for remote electronic medical record access. On-site work that is central to research conduct during the pandemic should involve only the minimum number of essential trained research staff who agree to carry out this work without coercion or concern for consequences regarding safety and job security. It is crucial that on-site research personnel receive safety and PPE training and that safety protocols and guidelines be reviewed during rapidly changing working conditions. If nonpandemic-focused research is restricted, investigators should identify the current status of patients already enrolled in these studies (e.g., receiving the study intervention, undergoing follow-up assessments) to determine whether any interventions must continue for patient safety. For example, some study interventions may be dangerous to terminate (e.g., a drug whose abrupt discontinuation could cause withdrawal). Strategies should be developed to complete the treatment course and collect data on at least the primary outcome, if safe and feasible. If remaining assessments require in-person data collection (e.g., performance-based measures of physical function), collecting the primary outcome(s) should be prioritized while determining whether any data could be collected using alternate methods (e.g., questionnaires via secure video link or telephone). Patients or their substitute decision-makers should be notified about any relevant changes to the status of their study participation in light of the pandemic. All stakeholders should consider how their institution and research program can best serve patients during the pandemic. All studies should be reviewed and a portfolio of studies selected based on the center's capacity, case mix, and clinical and research expertise.
Necessary adaptations of non-COVID research should be considered during this process, such as assessing the suitability of COVID-19 patients for enrollment (as long as this does not preclude enrollment in COVID-focused studies). Consider whether it is relevant to revise case report forms and databases to document COVID-19 status. When reviewing and selecting studies to continue, consider leveraging preapproved studies that could specifically apply to those with COVID-19. For example, consider continuing ongoing studies relevant to conditions with high morbidity and mortality in the general ICU population, such as therapies for severe sepsis and septic shock (e.g., balanced vs unbalanced crystalloid [e.g., Fluids and Septic Shock (FISSH)] [22] or vitamin C [e.g., Lessening Organ Dysfunction with Vitamin C Trial (LOVIT)]) (23). The LOVIT trial obtained specific Health Canada and research ethics approval to enroll patients with COVID-19, acknowledging that viral infections can cause septic shock, and recognizing that vitamin C was prioritized by the WHO as a treatment for investigation in COVID-19 (24). Other ongoing trials, such as those of therapeutic heparin, may have particular pathophysiologic relevance during the pandemic (26). The process of reviewing each study should consider protocol complexity. Some protocols may be simple, require no additional time of bedside staff or research staff, and consume no PPE, thereby maximizing the benefits produced through the allocation of scarce resources to research (9). Two such examples are the Bacteremia Antibiotic Length Actually Needed for Clinical Effectiveness (BALANCE) trial, comparing 1 versus 2 weeks of antibiotics for bacteremia (27), and the Revisiting the Inhibition of Stress Erosions Study (REVISE) trial, comparing acid suppression versus placebo for stress ulcer prophylaxis (28). The former trial requires no extra hospital resources; the latter requires additional research pharmacy time to prepare study drugs. More complex nonpandemic-focused trials may need to be paused. For example, the Trial of Early In-bed Cycling For Mechanically Ventilated Patients (CYCLE) trial of in-bed cycling requires bedside staff time and the PPE that physiotherapists would use in usual care (29), but also transferring an ergometer into the patient's room and cleaning it thereafter, followed by outcome assessments on the wards (30). Consent requirements are an important consideration. Waived consent for low-risk observational studies and registries may be suitable, as is often the case during nonpandemic times. Studies with approved alternate consent methods, such as witnessed verbal telephone consent, deferred consent, two-physician consent, delayed or waived wet-ink signature confirmation, or email e-signature confirmation, may be easier to continue. These approaches allow timely study enrollment and concurrently honor the ethical principle of autonomy in the research process while respecting physical distancing. Reviewing the portfolio of research conducted in a single center should also consider opportunities for or contraindications to coenrollment, which is the practice of enrolling patients in multiple studies either concurrently or sequentially. Some studies will be more viable for coenrollment than others. Where possible, coenrollment in COVID and non-COVID trials should be considered.
Nonpandemic-focused studies evaluating commonly available interventions (rather than new biological agents) often allow coenrollment according to scientific, logistic, and ethical guidelines (31). Whatever their focus, trials designed to reduce mortality invariably allow coenrollment into studies aimed at humanizing end-of-life care, which is particularly important given restricted bedside family presence and communication barriers due to PPE during the pandemic. For example, the 3 Wishes Project (32), which involves eliciting and fulfilling wishes for dying patients from families (remotely), from patients when able, and from their clinicians, would not interfere with interventions being tested in other trials. Existing, adapted, or newly crafted coenrollment policies will also influence which nonpandemic studies to continue. When coenrollment is not possible, pandemic-focused research should generally be prioritized. However, case-by-case decisions could consider patient-specific risk:benefit assessments, study-specific logistics, and the values of the patient or substitute decision-maker if feasible. Review the Relevance and Resource Requirements of Each Study. Every study involves opportunity costs, including human time and financial resources. A run-in phase or important pilot work for unfunded pandemic research may be needed before securing future funding. Continuing nonpandemic-focused studies may confer financial stability on research teams and maintain accountability to granting agencies while awaiting funding decisions for COVID-19 investigations. To illustrate how these principles may be applied, in Table 1 we present an application of this framework to the studies in our center that were ongoing when the pandemic began or were considered for start-up in response to the pandemic. Consider Research Oversight. Pandemic mitigation efforts could interfere with all aspects of a successful clinical trial, including informed consent, accrual, intervention delivery, safety monitoring, and outcome assessment (16). Studies conducted during pandemic periods, whether pandemic-focused or not, should be held to the highest possible standards of implementation fidelity considering the extenuating circumstances. Therefore, when deciding to continue nonpandemic-focused research, centers should examine each study to ascertain whether research integrity can be maintained throughout the pandemic period. Existing research protocol implementation may need to be adapted. Modifications may relate to informed consent (e.g., alternate informed consent methods). Enabling and evaluating protocol adherence may need to be done remotely rather than on-site and may need to be retrospective rather than real-time. To keep safety assessments as current as possible, research staff phone calls or automatic e-alerts within the electronic medical record should be considered. Centers may consider prioritizing data collection and entry for trials addressing the efficacy, safety, and futility of pandemic-specific interventions to hasten the analysis and dissemination of their results. Some data collection for nonpandemic-focused studies may need to be delayed. For centers with paper-based patient charts, data collection may need to be adapted, such as batching data collection, scanning daily flow sheets to the research office, or postponing noncritical data until medical records are uploaded into the hospital electronic charting system.
Some data may be foregone if ascertainment requires real-time on-site assessment, which is precluded by physical distancing. Pandemic-specific standard operating procedures should be enacted to track any modifications to protocol implementation for each study in your center. Document decisions in consultation with investigators, steering committees, sponsors, and other local stakeholders. Any changes should be approved by the relevant local institutional authorities and reported to ethics boards per local guidance. Reconsider Decisions Regularly. As the pandemic continues and institutional impacts evolve or resolve, revisit research decisions regularly with a variety of stakeholders, including representation from clinical staff, hospital and university leadership, ethics and regulatory authorities, funders, research staff, and investigators. This stakeholder consultation should respond rapidly as the pandemic evolves, to receive feedback about progress and problems and to remediate as necessary. This group will be important when considering how to reinstitute research as the burden of the pandemic abates. When paused research is reinstated, seek broad input and start first with familiar and less complex studies, so as not to unduly burden the individuals affected. Contingency plans should be developed for prompt cessation of recruitment in each study and for follow-up of patients on protocol in case the pandemic surge overwhelms research capacity for any study. This plan should include alternate research management and local study oversight should staff or investigators become ill. Consider Final Reporting Requirements. After the pandemic subsides, investigators should consider whether any changes or pauses to research during the pandemic have affected the internal or external validity of each study in your center. Periods of paused enrollment should be reported to the methods center for each study. Methods centers for single or multicenter studies should report temporary adaptations to their trial, if any (16). Consider whether changes are warranted to the statistical analysis plan, including characterizing patients with COVID-19, approaches to missing data, or post hoc subgroup analyses if sensible and sample size permits. We did not address other relevant issues, such as how discontinuing nonpandemic-focused research during pandemic times may have cascading consequences beyond delaying study results. Sequelae may include lost staff time, contract modification, or staff unemployment. If ongoing studies are completely terminated, efforts to date, including patient contributions and research funds, may be wasted. Decisions to halt the generation of medical knowledge should be made with awareness of the opportunity costs in the short and long term for individuals and society (16, 33, 34). This report did not benefit from the input of patients or the public, nor of agencies funding ongoing studies. We did not undertake a formal document analysis of hospital, university, or government policies. During the H1N1 pandemic in Canada, only 7% of critical care research coordinators reported deferring ongoing or planned non-H1N1 studies to facilitate H1N1 studies (8). Although we did not seek information on the influence of the COVID-19 pandemic on clinical research in other jurisdictions, an international survey is underway (21). Clinical research will play a vital role in understanding the influence of COVID-19 on critical illness, informing patient care around the world.
While research is key in the response to public health emergencies, it must never impede clinical response efforts. Several lines of reasoning are needed to balance the interplay between COVID-19-specific studies and other studies without jeopardizing the care of patients or the safety of staff. During the pandemic, research should not focus exclusively on the potential health needs of some individuals while neglecting the health needs of others. Clinical research is essential to improving the process and outcomes of care for patients both with and without COVID-19. The benefits and burdens of research should be equally distributed where possible, or allocated according to objective and transparent decision-making processes. We propose that decisions to pause or pursue nonpandemic research during pandemic times be made following careful deliberation based on objective criteria. Considerations include aspects of the research process for each study, such as the roles of bedside and research staff, the informed consent model, intervention complexity, protocol integrity, data collection, and infection control concerns such as the use of scarce PPE. This framework considers capacity evaluation, safety assessments, and local approval. Plans to continue nonpandemic research should be proportionate, transparent, informed by key stakeholders, and revisited as the pandemic abates.
The outbreak of novel coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), began in December 2019. 1, 2 It has led to a pandemic that has so far caused more than 17 million infections and 600,000 deaths in over 200 countries/regions. 3 This new pandemic has also created an unprecedented burden on healthcare systems throughout the world, highlighting the urgent need to improve hospital management and the early identification and stratification of patients. SARS-CoV-2 is considered principally a respiratory pathogen, and respiratory symptoms are the most common; however, extrapulmonary manifestations and injury should not be ignored. It has been reported that pre-existing cardiovascular diseases are associated with worse prognosis in COVID-19 patients. 4 Conversely, cardiovascular complications caused by COVID-19, including arrhythmias, myocardial infarction (MI), myocarditis, and heart failure (HF), are also significant contributors to the increased mortality and rate of admission to the intensive care unit (ICU) of COVID-19 patients. 4-9 Therefore, it is of the greatest importance to explore the damage caused by SARS-CoV-2 to the cardiovascular system. Meanwhile, it is also of considerable value to identify risk factors for predicting potential cardiovascular complications at an early stage, which could guide and optimize therapies for COVID-19 under circumstances of limited healthcare resources. However, related studies remain insufficient, and early prediction models that combine clinical features to identify COVID-19 patients at high risk of cardiovascular events are still poorly defined and challenging to investigate. The current retrospective, multicenter, observational study was conducted to develop and validate a novel risk score for predicting cardiovascular complications, and to assess the relationship between these complications and prognosis among COVID-19 patients. This retrospective, multicenter, observational study of laboratory-confirmed COVID-19 patients was conducted in accordance with the amended Declaration of Helsinki and approved by the West China Hospital of Sichuan University Biomedical Research Ethics Committee (No. 2020-272). Written informed consent was waived because of the urgent need to collect clinical data and the retrospective observational design. Clinical data of patients hospitalized between January 14 and March 9, 2020 in two major COVID-19-designated hospitals (Wuhan Red Cross Hospital and People's Hospital of Wuhan University) in Wuhan city and in 36 COVID-19-designated hospitals in Sichuan province, China, were collected and analyzed. All included patients were randomly divided into a training set and a testing set (70% vs. 30% of patients). The training set was used to develop a risk score, and the testing set was used to validate the robustness and generalizability of the risk score. All patient data were anonymously recorded to ensure confidentiality. Two doctors reviewed the medical records of all patients independently. Any disagreement was resolved through a third doctor and team discussion until consensus was reached. All patients enrolled in this study were diagnosed with confirmed COVID-19 according to World Health Organization interim guidance. 10 A confirmed case was defined by a positive SARS-CoV-2 nucleic acid result on real-time reverse-transcription polymerase chain reaction.
The following exclusion criteria were used: (1) age under 18 years; (2) pregnancy; (3) death on admission or missing baseline data; (4) recovery from cardiac arrest/cardiopulmonary resuscitation. Demographic characteristics, basic vital signs, symptoms and signs, comorbidities, chest computed tomography (CT) images, and laboratory examination data were retrospectively collected from electronic medical records. All these baseline data were recorded at admission or within 24 h after admission. Continuous variables were categorized for further analysis; the threshold for each continuous variable was determined by the clinically relevant cut-off value or by the upper or lower limit of the normal range. Two doctors completed the data collection independently. The occurrence of cardiovascular complications was considered if any of the following appeared during hospitalization: (1) acute myocardial injury; (2) acute myocardial infarction (AMI), including non-ST-elevation or ST-elevation MI; (3) new or worsening HF; (4) de novo arrhythmia; (5) deep vein thrombosis (DVT) or pulmonary embolism (PE). This composite endpoint has been used in previous studies that evaluated the cardiovascular complications of pneumonia. 11, 12 According to the Fourth Universal Definition of Myocardial Infarction, 13 myocardial injury was diagnosed by the detection of elevated cardiac troponin with at least one value above the 99th percentile upper reference limit, and the criteria for AMI were acute myocardial injury with at least one of the following: symptoms of myocardial ischemia; new ischemic electrocardiographic (ECG) changes; imaging evidence of new loss of viable myocardium or new regional wall motion abnormality; or identification of a coronary thrombus by angiography or autopsy. New or worsening HF was considered in patients with clinical signs (such as pulmonary edema, acute congestive HF, cardiomegaly, vascular congestion, etc.) and supportive findings on ECG or chest radiograph. 14 De novo arrhythmia was determined on the basis of a new episode of arrhythmia documented by ECG during hospitalization that had not been detected before hospital admission. DVT or PE was considered on the basis of clinical manifestations and supportive findings on ultrasound or CT angiography. Likewise, two doctors reviewed and checked the diagnoses of cardiovascular complications independently; any disagreement was resolved through a third doctor and team discussion until consensus was reached. Data were analyzed using IBM SPSS Statistics version 23.0 (SPSS). Data are expressed as mean ± standard deviation or median (interquartile range) for continuous variables, and as counts and percentages for categorical variables. The data were tested with the Kolmogorov-Smirnov normality test and Bartlett's test for homogeneity of variance. Differences between the two groups were tested using a two-tailed independent-samples Student's t test for normally distributed continuous variables, the Mann-Whitney U test for non-normally distributed continuous variables, and the χ2 test or Fisher's exact test for categorical variables. Variables with p < .10 were included in univariate and multivariate logistic regression analyses to identify independent risk factors. Odds ratios (ORs) and confidence intervals (CIs) were used to evaluate risk factors. The score for each independent risk factor was assigned as an integer value close to its regression coefficient, and the total risk score of each patient is the sum of the single scores.
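To make the score-construction procedure concrete, the following is a minimal sketch, in Python, of the steps just described (random 70/30 split, multivariate logistic regression on the dichotomized predictors, and rounding of coefficients to integer point values). The original analysis was performed in SPSS; the file name, column names, and library choices below are illustrative assumptions, not the authors' code.

# Illustrative sketch only: the study used SPSS; the file and column
# names below are assumptions, not the authors' actual dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("covid_cv_cohort.csv")  # hypothetical file: one row per patient
predictors = ["male", "age_ge_60", "cough", "chronic_heart_disease",
              "lymph_le_1p1", "bun_ge_7", "egfr_le_90",
              "aptt_ge_37", "ddimer_ge_0p5", "pct_ge_0p5"]  # 0/1 indicators

# Random 70%/30% division into training and testing sets.
train, test = train_test_split(df, test_size=0.3, random_state=42)

# Multivariate logistic regression on the training set.
model = LogisticRegression(max_iter=1000)
model.fit(train[predictors], train["cv_event"])

# Each factor's points: an integer value close to its regression
# coefficient; a patient's total score is the sum of the points for
# the factors that are present.
points = {p: int(round(float(c))) for p, c in zip(predictors, model.coef_[0])}
scores = train[predictors].mul(pd.Series(points)).sum(axis=1)
print(points)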
To assess the accuracy of the risk score as a predictor of cardiovascular complications of COVID-19, a receiver operating characteristic (ROC) curve analysis was performed and the area under the ROC curve (AUC) was reported. The optimal cut-off point of the risk score was based on Youden's index of the ROC curve, corresponding to the maximum joint sensitivity and specificity. After that, the variables required for calculating the risk score were collected, and the score was calculated and tested in the testing set. The performance of the risk score in the training set, the testing set, and all patients was compared. We also conducted survival analysis in all patients using the Kaplan-Meier method and log-rank test to explore the impact of cardiovascular complications on the prognosis of COVID-19 patients. p < .05 was considered statistically significant. A total of 1240 patients with confirmed COVID-19 were retrospectively enrolled in the study. Ultimately, 33 patients were excluded according to the exclusion criteria, and 1207 patients were analyzed. Among them, 845 patients (70% of all patients) were randomly assigned to the training set and 362 patients (30%) were included in the testing set (Figure 1). In the training set, 122 (14.4%) patients were found to have cardiovascular complications. Among them, 98 patients had acute myocardial injury, and 5 of these further developed AMI. A total of 8 patients had new or worsening HF and 30 patients had de novo arrhythmia. Additionally, only one patient was diagnosed with DVT. The proportion of males and the median age were both significantly higher among patients with complications than among those without (63.9% vs. 44.3%, p < .001; 64.5 vs. 53 years, p < .001). There were also significant differences between patients with and without cardiovascular complications in terms of fever (p = .031), cough (p = .002), weakness/fatigue (p = .035), unconsciousness (p = .036), chronic heart disease (p < .001), and diabetes mellitus (p = .036). Furthermore, some differences in laboratory examinations were demonstrated. Compared with patients without cardiovascular complications, patients with cardiovascular complications had higher white blood cell counts (6.11 vs. 5.43 × 10⁹/L; p = .001), neutrophil counts (4.73 vs. 3.29 × 10⁹/L; p < .001), and aspartate aminotransferase (30.5 vs. 23.7 U/L; p < .001), among others, but lower lymphocyte counts (0.9 vs. 1.25 × 10⁹/L; p < .001), platelet counts (184 vs. 207 × 10⁹/L; p = .005), and estimated glomerular filtration rate (eGFR; 85.8 vs. 102.3 ml/min/1.73 m²; p < .001), among others. The detailed baseline characteristics of the patients are shown in Table 1. The factors with p < .10 in Table 1 were entered into the logistic regression analysis; in the univariate and multivariate analyses, continuous variables were converted to categorical variables. Finally, 10 independent risk factors associated with cardiovascular complications were identified. The details and the corresponding score for each risk factor are shown in Table 2. As a result, the total risk score for each patient varied from 0 to 23 points, and an optimal cut-off value of 7.5 was determined in the training set (Figure 2). In the testing set, 62 (17.1%) patients were found to have cardiovascular complications. The accuracy of the risk score in the testing set (AUC: 0.756; 95% CI: 0.690, 0.822) was similar to that in the training set. Furthermore, the AUC of the risk score in all patients (training set plus testing set) was 0.766 (95% CI: 0.726, 0.806).
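For illustration, the Youden's-index cut-off selection described above can be sketched as follows in Python; the outcome and score arrays are invented toy values, not the study's data.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Toy data (not the study's): 0/1 outcomes and integer risk scores.
y = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1])
score = np.array([3, 5, 2, 9, 8, 11, 6, 4, 13, 7, 1, 10])

fpr, tpr, thresholds = roc_curve(y, score)
auc = roc_auc_score(y, score)

# Youden's index J = sensitivity + specificity - 1 = TPR - FPR;
# the optimal cut-off is the threshold that maximizes J.
j = tpr - fpr
best = j.argmax()
print(f"AUC={auc:.3f}, cut-off={thresholds[best]}, "
      f"sensitivity={tpr[best]:.3f}, specificity={1 - fpr[best]:.3f}")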
In the testing set, the optimal cut-off value was also 7.5 (specificity: 0.620; sensitivity: 0.785). The results are summarized in Table 3. Overall, our results were relatively stable and reliable, and the novel risk score showed a degree of generalizability. In addition, cardiovascular complications were significantly associated with poorer survival (log-rank test: p < .001) in the Kaplan-Meier curves, showing that cardiovascular complications had considerable adverse impacts on the prognosis of COVID-19 patients (Figure 3). To our knowledge, this is the first predictive tool for cardiovascular complications among COVID-19 patients at admission to hospital. Ten independent risk factors at admission were identified, and the total score varied from 0 to 23 points for each patient. The risk score has also been validated. A higher total score correlates with an increased risk of cardiovascular complications, which might lead to significantly poorer prognosis in COVID-19 patients. Therefore, early prediction of cardiovascular complications is important and necessary. Wei et al. 15 previously conducted a similar study and found that age, pre-existing cardiovascular disease, eGFR, and procalcitonin were associated with acute myocardial injury (defined as a high-sensitivity troponin T [hs-TnT] >14 pg/ml) in COVID-19 patients. Meanwhile, acute myocardial injury was more strongly associated with severe/critical illness, admission to the ICU, mechanical ventilation, and death. In the current study, more baseline variables were included. Furthermore, the primary outcome of composite cardiovascular events was adopted because all of these events should be considered important cardiovascular complications of COVID-19. 16 It is believed that a risk score might enable more accurate identification and stratification of COVID-19 patients than a single predictor. Therefore, based on independent risk factors, we developed a novel, practical predictive risk score. As a bedside tool, this scoring system comprehensively combines demographic characteristics, symptoms, comorbidities, and laboratory examinations. According to previous reports, cardiovascular complications have been common clinical manifestations and have increased the health burden of SARS and influenza over recent decades. 17, 18 The potential mechanisms of cardiovascular complications during SARS-CoV-2 infection have not been thoroughly explained. Possible mechanisms include direct virus-mediated cardiotoxicity, hypoxia-related injury, immune-mediated cytokine storm, systemic inflammation, and so forth. 19 Infection of the pericardium causing massive edema, as well as myocardial fibrosis or scarring, has also been put forward in a recent review. 20 Besides, the increased cardiac burden and cardiopulmonary dysfunction caused by SARS-CoV-2 infection might be responsible for myocardial ischemia, worsening HF, and new arrhythmia. Previous studies have demonstrated that myocardial injury occurred in about 10% of patients with COVID-19. 9 In the current study, composite cardiovascular complications were reported in 15.2% of COVID-19 patients. We included 917 (74%) patients from Wuhan city, a high-risk and high-prevalence area, and 323 (26%) patients from Sichuan province, a low-risk district.
Furthermore, these patients were randomly assigned to either the training set or the testing set in an attempt to draw a relatively comprehensive and fair conclusion. However, it must be noted that the incidence and types of cardiovascular complications might be associated with illness severity and population characteristics. Future studies are warranted to verify our conclusions. Some factors in the risk score have been widely confirmed to be connected with cardiovascular events, including male sex, older age, chronic heart disease, prolonged APTT, and elevated D-dimer. Their predictive value for cardiovascular complications has also been demonstrated in community-acquired pneumonia. 12 However, some independent risk factors that emerged from the multivariable analysis were unexpected or uncommon and should be treated cautiously. Irwin 21 conducted a literature review and found that cough, an effective means of clearing the airways, could also cause a variety of complications, including cardiovascular complications. However, the impact of cough on patients remains to be clarified. A decreased lymphocyte count is common in COVID-19 patients according to previous studies. 4, 9 The systemic inflammatory response and immunocompromised status might be responsible. One previous study also reported that the numbers of helper T cells and suppressor T cells both decreased significantly, and were more impaired in severe COVID-19 cases; the authors concluded that SARS-CoV-2 might act mainly on lymphocytes, especially T lymphocytes. 22 It has also been reported that lymphopenia plays a role in accelerated atherosclerosis and an increased incidence of cardiovascular events. 23 The association between blood urea nitrogen or eGFR and cardiovascular complications in COVID-19 is still unclear. However, our results are consistent with a previous study of influenza: Nin et al. 24 revealed that, in 2009 influenza A (H1N1) viral pneumonia, patients with AKI presented more cardiovascular dysfunction than those without AKI. Procalcitonin often has high accuracy for the diagnosis of bacterial infections in clinical practice, and it is understandable that patients with cardiovascular complications were more likely to be severe cases coinfected with bacteria due to immunosuppression. Additionally, in one previous population-based prospective study, procalcitonin was found to be correlated with several established cardiovascular risk factors (C-reactive protein, hypertension, renal function, etc.) and positively associated with cardiovascular events and cardiovascular death. 25 We found that some risk factors in the current study were also used in previous prediction models for critical illness, admission to the ICU, or mortality of COVID-19 patients. For instance, Gong et al. 26 developed a nomogram in which older age and blood urea nitrogen were associated with severe COVID-19; in another clinical prediction model, for in-hospital mortality of COVID-19 patients, age, history of heart disease, lymphocyte count, D-dimer, and eGFR were used. 27 It has to be acknowledged that the AUC of the current risk score is inferior to those of the above prediction models, which varied from 0.8 to 0.9. We speculate that some baseline data were missing owing to the retrospective study design, which might have compromised the discriminatory power of this risk score. It is also possible that the endpoint of cardiovascular complications was more heterogeneous than that of severe illness or death.
In addition, the severity of the various cardiovascular complications was not identical among the included patients. However, our study has a larger sample size (over 1200 patients), clear and widely accepted definitions were adopted for all included patients, and the diagnosis of cardiovascular complications was checked by different researchers to guarantee the accuracy of our conclusions. Given the lack of specific antiviral agents for SARS-CoV-2 and the significant adverse effects of cardiovascular complications on the prognosis of patients, early identification of COVID-19 patients at high risk of cardiovascular complications, timely intervention, and protection of target organs are important and essential. In the current study, all risk factors are easy to obtain at admission to hospital, and the risk score has been developed and validated with promising predictive capacity. It might help clinicians make optimal treatment decisions for patients who are prone to develop cardiovascular complications, and help researchers explore in more detail the systemic damage and pathophysiological mechanisms of COVID-19 in the future. There are several limitations to our study. First, it was a retrospective observational study with potentially unavoidable selection bias. Second, the sample size was relatively moderate, and the number of patients was not equal between the groups with and without cardiovascular complications. Third, the drugs and therapies received before admission might have had considerable impacts on our results. Fourth, we only recorded cardiovascular complications before discharge, without follow-up; the long-term damage of SARS-CoV-2 to the cardiovascular system and the related risk factors remain to be explored. Further well-designed, multicenter, long-term studies with better comparability are warranted to clarify the characteristics of these risk factors and verify our conclusions. We developed and validated a novel risk score based on 10 risk factors at admission: male sex, age ≥60 years, cough, chronic heart disease, lymphocyte count ≤1.1 × 10⁹/L, blood urea nitrogen ≥7 mmol/L, eGFR ≤90 ml/min/1.73 m², APTT ≥37 s, D-dimer ≥0.5 mg/L, and procalcitonin ≥0.5 μg/L. This risk score has a promising predictive capacity for cardiovascular complications, which can significantly impair the prognosis of COVID-19 patients. Our conclusions need to be confirmed in future studies.
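As a bedside illustration, the published thresholds translate directly into a simple checklist function, sketched below in Python. The per-factor point values come from Table 2 of the paper and are not reproduced in the text above, so the weights here are placeholders of one point each (the actual total ranges from 0 to 23, with a cut-off of 7.5); the dictionary keys and field names are assumptions.

# Sketch of a score calculator; WEIGHTS are PLACEHOLDERS (the real
# point values are in Table 2 of the paper and are not given in the text).
FACTORS = ["male", "age_ge_60", "cough", "chronic_heart_disease",
           "lymph_le_1.1", "bun_ge_7", "egfr_le_90",
           "aptt_ge_37", "ddimer_ge_0.5", "pct_ge_0.5"]
WEIGHTS = {f: 1 for f in FACTORS}  # replace with the Table 2 values

def cv_risk_flags(p: dict) -> dict:
    # Apply the admission thresholds reported in the paper.
    return {
        "male": p["sex"] == "M",
        "age_ge_60": p["age"] >= 60,
        "cough": p["cough"],
        "chronic_heart_disease": p["chd"],
        "lymph_le_1.1": p["lymphocytes_1e9_per_L"] <= 1.1,
        "bun_ge_7": p["bun_mmol_per_L"] >= 7,
        "egfr_le_90": p["egfr_ml_min_1.73m2"] <= 90,
        "aptt_ge_37": p["aptt_s"] >= 37,
        "ddimer_ge_0.5": p["d_dimer_mg_per_L"] >= 0.5,
        "pct_ge_0.5": p["procalcitonin_ug_per_L"] >= 0.5,
    }

def cv_risk_score(p: dict) -> int:
    # Total score = sum of the weights of the factors that are present.
    return sum(WEIGHTS[k] for k, v in cv_risk_flags(p).items() if v)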
infection, Primary Care Physicians and Infectious Disease Doctors are consulted, as well as Resuscitation Specialists. 1, 2 Although at the moment the role of the Allergist seems secondary to that of the other specialist physicians mentioned above, allergic rhinitis remains one of the most common diseases and will continue to affect about 10% to 40% of the world population, varying according to geographic area. 3 There are many questions that Rhinologists, together with Allergists, should answer. For example, what effects are the lockdown and the quarantine imposed by all governments going to have on the course of the various allergic diseases? With the present study, we aimed to answer this last question, focusing our attention on dust mite allergy. In particular, we tried to understand whether the course of disease in patients suffering from dust mite allergy was negatively affected by the COVID-19 restrictions, which were certainly important to fight the pandemic but forced patients to stay at home for a long time. Forty-five patients allergic to dust mites (23 males, 22 females; median age 32) seen at the Otolaryngology (ORL) Departments of the Foggia and Bari University Hospitals participated in this study. In this group, 13 (28.8%) individuals were allergic to multiple allergens, but the study was carried out outside their allergy season. As regards comorbidities, 13 (28%) subjects were asthmatic and 2 (4%) had aspirin sensitivity. In telehealth consultations conducted in accordance with the guidelines of the Higher Institute of Health (ISS), patients attended phone interviews during the COVID-19 lockdown and were questioned about their sinonasal symptoms from March 9 to April 9, 2020, by answering the sinonasal outcome test (SNOT-22) questionnaire. 4, 5 This questionnaire was selected because a statistically significant correlation (P < .001) has been demonstrated between the SNOT-22 and the Rhinitis Control Assessment Test (RCAT). 6, 7 Further data concerning the medications used to treat allergy, and the number of days per month in which they were used, were collected. Patients' responses during the COVID-19 lockdown were compared with those collected in our clinics during the same time frame in 2019. Patients in essential jobs (ie, health workers, security guards, policemen, transport workers) were excluded, as their level of exposure was potentially the same as in 2019 due to their work duties. Additionally, patients with uncontrolled symptoms of asthma, as well as subjects who underwent immunotherapy and/or changed medications, were not included. Moreover, no patient had fever; as to their history, they reported neither COVID-19 cases in their families nor suspected contacts, as they were quarantined at home according to the Italian regulations. The assessment of significant differences across the means of continuous variables relied on the paired-sample t test; the Bartlett test was used to assess the distribution of the variables. P values <.05 were considered significant. Data analyses were performed with STATA-MP software, version 15. Our results confirmed that the lockdown ordered by the Italian government, although necessary, negatively influenced the clinical history of patients with dust mite allergy.
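As a sketch of the statistical comparison just described (the authors used STATA-MP 15), a paired-sample t test in Python might look like the following; the SNOT-22 values are invented examples, not the study data.

import numpy as np
from scipy import stats

# Invented example SNOT-22 totals for the same patients in the two years.
snot22_2019 = np.array([18, 22, 15, 30, 12, 25, 20, 17, 28, 21])
snot22_2020 = np.array([24, 27, 19, 36, 15, 31, 22, 23, 33, 26])

# Bartlett's test for homogeneity of variance, as in the paper.
bartlett_stat, bartlett_p = stats.bartlett(snot22_2019, snot22_2020)

# Paired-sample t test: each patient serves as his or her own control.
t_stat, p_value = stats.ttest_rel(snot22_2020, snot22_2019)
print(f"t = {t_stat:.2f}, p = {p_value:.4f} (significant if p < .05)")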
All SNOT-22 scores during the COVID-19 lockdown were higher than those of the previous year (Table 1); however, only some clinical parameters, namely "nasal obstruction," "runny nose," and "need to blow nose," reached statistical significance (P < .05). Additionally, four other SNOT-22 parameters were statistically significant ("difficulty falling asleep," "waking up at night," "frustrated/restless/irritable," "sad"). While potentially related to nasal allergy, these feelings and the sleep disruption could also be related to the historic moment that we are all experiencing. 8 Also with regard to treatment, results during the COVID-19 lockdown were worse than those of 2019 (Table 2), although only two of the investigated drugs, "systemic antihistamines" and "nasal decongestants," reached statistical significance (P < .05). This is consistent with the symptoms and the behavior that patients with perennial allergic rhinitis usually have; in fact, most of the time, chronic nasal obstruction is the expression of the "minimal persistent inflammation," 9 which is very often responsible for the use of nasal decongestants. These findings suggest that being quarantined at home for weeks increased the exposure to dust mites in our study group. 10 Additionally, since hospitals were active only for urgent/emergency care, we cannot exclude the exacerbation of other preexisting comorbidities or the onset of new ones. Unfortunately, the restrictions imposed by the lockdown, although necessary to fight the pandemic, were not in accordance with the indications of the ARIA (Allergic Rhinitis and its Impact on Asthma) guidelines, 11, 12 according to which avoiding contact with the allergen in both indoor and outdoor environments is the most effective primary preventive measure in patients with respiratory allergies. When this is not possible, as at present, a "perfect storm" occurs, and the role of "counseling" through phone calls and telehealth is crucial. Our results evidence the necessity of an integrated strategy that includes environmental cleanup and therapeutic plans according to the international guidelines. Lastly, allergen-specific immunotherapy, when clinically indicated, remains the only treatment able to change the natural history of allergic diseases. However, in many localities worldwide, most The authors disclose no conflicts of interest. This research was conducted in accordance with the Guideline for Good Clinical Practice and the ethical principles originating in the Declaration of Helsinki. Informed consent to participate in this study was obtained from all participants. Data are available upon request to the corresponding author at any time.
The presence and diffusion of bioaerosols (bacteria, viruses, fungi, and other dead or living organisms, including biological debris) in the Earth's atmosphere impact ecosystems, climate, and human health (Burrows et al., 2009; Fröhlich-Nowoisky et al., 2016; Pöschl and Shiraiwa, 2015). The biosphere directly emits bioaerosols into the atmosphere, which subsequently enables their dispersion and transport, even over long distances (Després et al., 2012; Womack et al., 2010). In the course of atmospheric transport, bioaerosols may undergo further chemical and physical transformation, stress, and biological aging upon interaction with UV radiation, photo-oxidants, and various air pollutants such as acids, nitrogen oxides, ozone, and aromatic compounds. All these processes can limit or even suppress the vitality of the living fraction of bioaerosols and therefore affect their capacity to diffuse and to colonize new ecosystems (Womack et al., 2010). Due to the above challenges, present knowledge of the ability of viruses and bacteria to spread through the air and to diffuse infections and, more generally, diseases is still immature and demands a wide spectrum of investigation (Middleton, 2017; Morawska and Cao, 2020; Polymenakou, 2012). Most of the previous studies in the Mediterranean area have been limited to advections of air masses from the Sahara Desert only. The occurrence and impact of this type of air mass, very rich in desert dust, are frequent and well documented (Escudero et al., 2006; Formenti et al., 2011). However, the Mediterranean area is also characterized by the circulation of air masses of different origin, distinguished by the nature, type, quality, and extent of their contributions (Cusack et al., 2012; Kallos et al., 2007, 2014; Petroselli et al., 2018). Due to the very different characteristics of the source areas, these air masses are expected to carry different bacterial populations and specific chemical markers and pollutants. Moreover, only a few studies have used molecular-based approaches to investigate the relationships of different air masses with the bacterial communities in the Mediterranean area. In such studies, bioaerosol characterization was conducted by a low-throughput approach (cloning and sequencing of the 16S rRNA gene), while High-Throughput Sequencing (HTS) approaches were used in an even smaller number of cases. Most of the previous studies on aerosol-associated microbial communities in the Mediterranean area have focused on intense Saharan intrusions sampled in the proximity of the dust sources (Gat et al., 2017; Katra et al., 2015; Mazar et al., 2016; Polymenakou et al., 2008), or after long-range transport over the Mediterranean basin (Federici et al., 2018; Rosselli et al., 2015; Sanchez De La Campa et al., 2013). Much less is known about the specific characteristics of the bacterial communities transported by air masses from continental Europe. In this frame, the present study aims to define the patterns of the bacterial communities of atmospheric aerosol from distinct geographic regions reaching the Mediterranean. The samples were collected during different long-range transport events towards a background monitoring site. The hypotheses to be tested in this work are two-fold: (i) the bacterial community structure associated with long-range transported aerosol in the Central Mediterranean area differs significantly according to air mass provenance; (ii) there is a correlation between the main aerosol chemical characteristics and the airborne bacterial communities.
To test these hypotheses, we investigated the chemical and microbial datasets by cluster analysis, similarity tests, and non-metric multidimensional scaling analysis. All the aerosol samples analyzed in this work were collected at the EMEP regional background site of Monte Martano (MM) in Central Italy (42°48'19''N, 12°33'55''E). MM has been established in a relatively undisturbed location, near a television antenna, on the ridge of a small mountain chain (1100 m asl), above the timberline and facing a completely free horizon (Moroni et al., 2015). The site is equipped with aerosol, gaseous pollutant, and meteorological monitoring instrumentation (Moroni et al., 2015). Due to its elevation, the low background concentrations, and the 360° free horizon, the site is particularly suited for the assessment of long-range transport events of atmospheric aerosol (Federici et al., 2018; Petroselli et al., 2018a, 2018b), and its importance for the monitoring of Saharan dust advections has been recognized. The sampled filters underwent a thorough chemical characterization that included the investigation of both the inorganic and organic fractions of particulate matter. Major ion composition was determined by ion chromatography (DIONEX 2100) after 30 minutes of ultra-sonication in ultrapure water (18 MΩ); the quantified analytes included Li+ and Na+, among other major ions. For DNA recovery, samples were shaken for 1 h at maximum speed, centrifuged for 30 min at 10000 × g and then at 11500 × g for 15 min at 4°C to recover bacteria (Radosevich et al., 2002). The supernatant was discarded and the DNA was extracted. Each sequence was assigned to its original sample according to its index oligos and barcodes. After sorting the sequences, the reverse read of each paired-end sequence was reverse complemented and merged with the corresponding forward read. A quality cut-off was applied in order to remove the sequences that did not contain the barcode, those with an average base quality value (Q) lower than 30, and those that did not provide a perfect match in the overlapping part between the two paired ends. The barcode was removed and sequences were sorted into Operational Taxonomic Units (OTUs) using the UPARSE-OTU algorithm (Edgar, 2013). The minimum identity between each OTU member sequence and the representative sequence (i.e. the sequence that showed the minimum distance to all other sequences in the OTU) was set to 97%. The taxonomic classification of each OTU was carried out with the stand-alone version of the RDP Bayesian Classifier (Wang et al., 2007), using a 50% confidence level (Claesson et al., 2010). Chloroplast sequences were not excluded from further analyses because their abundance can provide information on PM origin. Three independent extractions, amplifications, and sequencing runs were performed on each sample in order to test the robustness of the proposed experimental approach, and the three replicates featured nearly identical OTU distribution profiles (data not shown). Cluster analysis using the Bray-Curtis similarity index was applied to the bacterial communities of the different aerosol samples. A similarity test (ANOSIM) was performed to detect differences in bacterial community structure, followed by the determination of discriminating genera by means of the SIMPER routine. This analysis indicates the average contribution of each genus to the similarity and dissimilarity between groups of samples.
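A rough sketch of this community-similarity workflow, together with the NMDS ordination described in the next paragraph, is given below in Python. The authors worked in R; the libraries and the random toy abundance table are illustrative assumptions, and ANOSIM and SIMPER themselves (available, for example, in R's vegan package) are not re-implemented here.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage
from sklearn.manifold import MDS

# Toy relative-abundance table: 19 samples x 116 abundant genera
# (each row sums to 1), standing in for the real OTU-derived data.
rng = np.random.default_rng(0)
abundances = rng.dirichlet(np.ones(116), size=19)

# Bray-Curtis dissimilarity between all pairs of samples.
bc = pdist(abundances, metric="braycurtis")

# Hierarchical clustering of the samples (the basis of a dendrogram
# such as the one in Figure 2).
tree = linkage(bc, method="average")

# Non-metric multidimensional scaling on the precomputed matrix.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0)
coords = nmds.fit_transform(squareform(bc))
print("NMDS stress:", nmds.stress_)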
Non-metric Multidimensional Scaling (NMDS) analysis was performed using the Bray-Curtis dissimilarity matrix, and the first NMDS dimension was then plotted against the chemical data in order to extract information from the correlation between the abiotic and biotic components of the dust samples. Additionally, the chemical peculiarities of the samples, grouped according to the similarities highlighted by the NMDS, were interpreted using principal component analysis (PCA). Statistical analyses and graphical representations were carried out in the R statistical environment (version 4.0.1; R Core Team, 2020) with the ggplot2 package (Wickham, 2016).

Nine Saharan dust advections and ten long-range transport events of other geographical origins have been considered in this work. The air mass origins were identified on the basis of back-trajectory (BT) analysis. The BTs for the identified provenance groups are summarized in Figure 1 for the 500 m endpoint; the other endpoints (50 and 1000 m above the ground) provided similar results and have been included in the supplementary material (Figure SM3). Saharan dust advection samples have been marked with the code SH. For the other provenances, three main macro-areas have been identified, namely regional (RG), North-Western (NW) and North-Eastern (NE). RG air masses have been defined as those remaining over the terrestrial and marine sectors of central Italy for at least 48 h before sampling. Table 1 summarizes all the PM samples collected during 2014 and 2015.

As a general trend, the Saharan dust samples are characterized by higher aerosol mass concentrations than the non-Saharan advections (Table 2), i.e. on average +68.4% for PM10 and +85.3% for PMcoarse (defined as PM10 - PM2.5), and by a lower PM2.5/PM10 ratio, reading 0.52±0.18 for SH versus 0.76±0.12 for non-SH samples. The increase in the concentration of the coarse fraction is typical of natural crustal aerosol sources such as desert dust (Formenti et al., 2011). Moreover, Ca and Fe, typical crustal markers, were on average higher in Saharan dust (Table 2). The insoluble fraction of Ca, defined as Catot - Ca2+, was close to 60% for Saharan dust, slightly lower for RG air masses and much lower for NW. This is consistent with both the source-area mineralogy and the different atmospheric processing during long-range transport (Avila et al., 2007). Biomass burning markers such as ammonium and organic carbon (OC) were higher in non-Saharan samples, and particularly enriched in NE samples, possibly due to the frequent wildfires recorded in Eastern European regions; the latter have been found to exert a distinct impact on the Monte Martano site, as previously reported. The average OC and EC values are in agreement with those reported for MM for the year 2009 (Sandrini et al., 2014). Total PAHs were on average highest for RG, followed by SH, NE and NW air masses; benzo(a)pyrene, the reference PAH for health effects, followed the same order of abundance.

Sequencing of 16S rRNA gene fragments led to the recovery of 1,286,659 high-quality sequences, which clustered, across all samples, into a total of 10,513 operational taxonomic units (OTUs) calculated at 97% sequence similarity. The average number of OTUs per sample was 2,239.
Although a considerable fraction of the total biodiversity (18.6% on average) could not be classified at genus level, a total of 879 different genera were identified across all samples. Among them, 116 genera were found whose relative abundance was higher than 0.5% in at least one sample; these were considered abundant (hereafter, abundant genera) and analyzed further (a minimal sketch of this filtering rule is given below). Thirty-two bacterial genera showed a relative abundance higher than 0.5% on average across all samples; they are shown in Figure 2. Overall, the most abundant genus was Sphingomonas (8.47%), followed by Acidovorax (3.89%) and Acinetobacter (3.33%). Methylobacterium is known to be resistant to desiccation and to γ radiation, together with Arthrobacter, also abundant in our samples (Favet et al., 2013). Microvirga has already been found in desert-derived air masses, and some species of this genus can reduce nitrogen gas to ammonia (Favet et al., 2013; González-Toril et al., 2020). It should also be noted that some of the 32 most abundant genera are human- and animal-associated bacteria and include known pathogens, such as Haemophilus, Staphylococcus, Streptococcus and Propionibacterium (Brock et al., 2012). Moreover, we retrieved some bacterial genera that, despite being ubiquitous in the environment, also contain many opportunistic pathogens and a few pathogens, such as the Pseudomonadales Acinetobacter and Pseudomonas, or Clostridium sensu stricto and Clostridium XI (Brown, 2014). However, analyses based on 16S rRNA sequences do not make it possible to distinguish pathogenic from non-pathogenic species or strains.

The results of the cluster analysis on the dataset containing only the abundant genera, summarized by the dendrogram in the right panel of Figure 2, were used for a data-driven visualization of the samples, which are reported in the barplot following the dendrogram order. The average distances reported in the dendrogram revealed a high β-diversity among the bacterial communities of the SH samples, which nevertheless generally clustered together. At genus level, the structure of the bacterial communities clearly showed differences due to sample provenance rather than to other factors such as seasonality. Interestingly, the non-Saharan samples collected during regional movements of air masses (RG) and, to a lesser extent, during long-range intrusions (NE and NW), showed a high richness of genera with low abundance, indicating a highly diverse and even community. Conversely, PM collected during Saharan intrusions showed a lower richness of genera, indicating that these microbial communities were dominated by fewer typical phylotypes. This is in contrast with previous studies, which generally reported higher diversity during dust intrusion events than during non-dust events (González-Toril et al., 2020; Griffin, 2007; Mazar et al., 2016; Polymenakou et al., 2008; Sanchez De La Campa et al., 2013). It may be hypothesized that, in the case of central Italy, both regional air masses and long-range intrusions from NE and NW cross more heterogeneous areas than Saharan intrusions do, thus collecting a wider variety of microorganisms.
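A minimal sketch of the abundance filter in base R follows, on the same kind of toy relative-abundance table as above; all names, sizes, and values are invented placeholders.

    # Minimal sketch (toy data): selecting the "abundant" genera, i.e. those
    # with relative abundance > 0.5% in at least one sample, and the subset
    # above 0.5% on average across all samples (cf. Figure 2).
    set.seed(1)
    genus_matrix <- matrix(runif(8 * 20), nrow = 8,
                           dimnames = list(paste0("S", 1:8), paste0("g", 1:20)))
    genus_matrix <- genus_matrix / rowSums(genus_matrix)   # rows sum to 1

    # Abundant genera: maximum relative abundance above the 0.5% threshold
    abundant <- genus_matrix[, apply(genus_matrix, 2, max) > 0.005, drop = FALSE]
    ncol(abundant)

    # Genera above 0.5% on average across samples (the set shown in the barplot)
    top_avg <- colnames(genus_matrix)[colMeans(genus_matrix) > 0.005]
    top_avg

On a uniform toy table nearly every column passes both thresholds; on real, highly skewed community data the filter reduces hundreds of genera to a small abundant subset, as in the reduction from 879 to 116 genera described above.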
In particular, when the BTs indicated that regional air masses were prevalent, it is also possible that local sources played a major role in shaping the bacterial communities. In fact, a wide variety of potential local sources, such as the soil surface, leaf surfaces, water bodies and even animal faeces, is present in the area and can contribute to the airborne bacterial load.

To gain further insights into the similarities and differences among the aerosol samples, and to hypothesize possible effects of the air masses of different origin, a non-metric multidimensional scaling (NMDS) analysis was performed on the Bray-Curtis distances between bacterial communities. This statistical method was applied to the dataset containing only the abundant genera, and the results are shown in Figure 3; a minimal sketch of this ordination is given at the end of this section. The NMDS analysis showed a good clustering (stress value < 0.15) of the samples according to their provenance group. In particular, the NMDS1 dimension (x-axis, Figure 3) separates the clusters well, particularly the Saharan dust samples from the others, while NMDS2 describes the variability within each group. NMDS is a helpful exploratory analysis, but it does not by itself explain the similarities or dissimilarities among samples, and additional information from other analyses is needed. For example, the samples associated with RG and NW air masses show a partial overlap in the NMDS analysis, which is understandable based on the phenomenology of the back trajectories (see Figure 1): RG air masses at MM tend to insist on the terrestrial and marine western sectors of peninsular Italy, partially sharing the footprint of the NW trajectories. Moreover, although it has been demonstrated that a number of diseases are linked to desert aerosols (Middleton, 2017), the concern about Saharan intrusions might be reduced from a public health point of view in this context, since air masses of European and regional origin were more enriched in human-associated bacteria than Saharan air masses. A more detailed boxplot representation of the distribution of the 116 most abundant genera for the four air mass origins is reported in the Supplementary Material (Figure SM5).

Chemical and microbiological data were combined to check for possible correlations between the variables and the sample provenances. In particular, some typical markers of Saharan dust, biomass burning, and industrial activities, the two latter being particularly enriched in non-Saharan samples, were identified among the chemical analytes. Moreover, the analysis of the β-diversity showed that the microbial communities of long-range transported Saharan dust were significantly different from those sampled when other air masses were present, strongly supporting the hypothesis that desert dust can impact the bacterial composition of the aerosol at our latitudes (Gat et al., 2017; Mazar et al., 2016; Rosselli et al., 2015). On the contrary, the non-Saharan samples showed similar communities, which in fact clustered together (Figures 2 and 3). Nevertheless, as observed for the chemical characteristics, even if similarities existed within the PM samples sharing the same origin, the differences were not negligible and suggested that each event was independent of the others. This has already been observed in previous works, at least for dust events: significant differences in bacterial community structure were reported during different dust events that impacted the same area, even when two events were very close in time (Federici et al., 2018; Yamaguchi et al., 2014).
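As a hedged illustration of the ordination behind Figure 3, the sketch below runs an NMDS on Bray-Curtis distances and a PCA on scaled chemical variables with vegan and base R; the toy community table and the chemical values are simulated stand-ins, not the Monte Martano measurements.

    # Minimal sketch (simulated data): NMDS on Bray-Curtis distances and a
    # PCA on scaled chemical variables, mirroring the analyses of Figure 3.
    library(vegan)

    set.seed(1)
    genus_matrix <- matrix(runif(8 * 20), nrow = 8,
                           dimnames = list(paste0("S", 1:8), paste0("g", 1:20)))
    genus_matrix <- genus_matrix / rowSums(genus_matrix)
    provenance <- factor(rep(c("SH", "RG", "NW", "NE"), each = 2))

    # Two-dimensional NMDS; a stress value below ~0.15 indicates a usable fit
    nmds <- metaMDS(genus_matrix, distance = "bray", k = 2, trymax = 100)
    nmds$stress
    plot(nmds, type = "n")
    text(nmds, display = "sites", labels = as.character(provenance))

    # PCA on invented chemical variables, scaled to unit variance
    chem <- data.frame(PM10 = runif(8, 5, 40), OC = runif(8, 1, 8),
                       Fe = runif(8, 0.1, 2), sulphate = runif(8, 0.5, 4))
    pca <- prcomp(chem, scale. = TRUE)
    summary(pca)
    biplot(pca)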
In order to combine the microbiological and chemical information, the NMDS1 dimension from the statistical analysis of the bacterial communities was correlated with the concentrations of the chemical variables, normalized against the PM values (w/w); a minimal sketch of this correlation test is given after the concluding remarks below. Some of the statistically significant correlations (p < 0.001) are shown in Figure 5. Specifically, a significant correlation was found between NMDS1 and PM2.5 normalized to PM10 (i.e. the PM2.5/PM10 ratio). This ratio was lower for Saharan dust, with the exception of the outlier SH_20141107, which corresponded to the weakest Saharan dust event, with a PM10 concentration of 8.4 µg/m3. The correlation was also significant between NMDS1 and PMcoarse, which was higher for SH samples because Saharan intrusions consist of coarser particles. The organic carbon (OC) content also correlated significantly with NMDS1: OC in Saharan dust was lower than in non-Saharan samples, because the latter can carry a larger anthropogenic contribution. The highest OC/PM10 values were found for SH_20140624 (which also had a high sulphate concentration) and for the 20140522 sample of NW provenance. As stated above, many of the bacterial genera that were significantly more abundant in non-SH than in SH samples (e.g. Lactobacillus, Streptococcus, Propionibacterium and Haemophilus) are generally related to anthropic and built environments, confirming the relevance of the impact that densely populated areas may exert on the bacterial populations transported by air masses. Anthracene was the only PAH correlating significantly with NMDS1, being higher for SH samples; the sum of low-molecular-weight PAHs (LW) was also higher for SH samples. Calcium concentrations showed no correlation with NMDS1, which was interpreted as due to the high local contributions of this element. Iron, on the other hand, was richer in SH samples and correlated negatively with NMDS1. Ammonium and sulphate concentrations were generally higher for non-Saharan air masses. Innocente et al. (2017) also reported that high ammonium and sulphate concentrations were associated with long-range transport from the north-west in Milan (Northern Italy), and that those air masses presented a high percentage of Propionibacterium. This is in agreement with our SIMPER analysis, which indicated the genus Propionibacterium as significantly more abundant in NW than in SH samples (Figure 4). However, Innocente and colleagues also reported that this correlation was weak, and that the ionic composition of the air masses was much more clearly related to air mass provenance than to bacterial community structure. Indeed, also in our case, since NMDS1 was strongly correlated with air mass origin, it was not possible to fully establish whether variations in the chemical variables were more correlated with variations in bacterial community structure or with dust provenance.

In this work, we have characterized the bacterial communities of 19 air masses of different origin, sampled as PM10 at the remote site of Monte Martano, in Central Italy. This EMEP station is representative of the Central Mediterranean area. The main results of the present work can be summarized as follows:

- Four distinctive air mass types were identified: previous similar work on this topic was substantially limited to Saharan (SH) dust air masses, while in the present study we extended the characterization to regional (RG), North-Western (NW), and North-Eastern (NE) air masses.
- At genus level, the distribution of the bacterial populations in the air masses clearly showed differences due to sample provenance. PM10 collected during Saharan intrusions showed a relatively low number of genera, while non-Saharan samples, particularly those collected during regional movements of air masses (RG), showed a high number of different genera with low abundance, indicating a highly diverse and even community.

Figure 2. Cluster analysis was performed on the 116 genera whose abundance was higher than 0.5% in at least one sample (abundant genera). Barplots represent only the 32 bacterial genera that showed a relative abundance higher than 0.5% on average. Chloroplasts were also included in the analysis.

Table 1. Sample characteristics in terms of provenance and aerosol mass concentration in the PM10 and coarse (PM10 - PM2.5) fractions. Provenance classification is based on BT analysis (see Figure S1 in the Supporting Information).
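As anticipated above, here is a minimal sketch of the NMDS1-versus-chemistry correlation test in base R. The NMDS1 scores and the OC/PM10 fractions are simulated stand-ins for the values behind Figure 5, and the Pearson method is an assumption, since the paper does not name the correlation coefficient used.

    # Minimal sketch (simulated data): correlating the first NMDS dimension
    # with a chemical variable normalized to PM10 (w/w), as done for Figure 5.
    # Pearson correlation is an assumption; the paper does not state the method.
    set.seed(1)
    n <- 19                                               # number of sampled events
    nmds1 <- rnorm(n)                                     # stand-in NMDS1 scores
    oc_frac <- 0.06 + 0.02 * nmds1 + rnorm(n, sd = 0.01)  # stand-in OC/PM10 fraction

    ct <- cor.test(nmds1, oc_frac)   # correlation coefficient and p-value
    ct$estimate
    ct$p.value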
Transporter 1) - the late endosomal-lysosomal receptor protein (2). Proteolytic processing is also required for severe acute respiratory syndrome coronavirus (SARS-CoV) (3, 4), and for the current pandemic SARS-CoV-2 (5). Lassa fever virus (LASV) uses a different mechanism, binding alpha-dystroglycan at the plasma membrane (6) for internalization, with a subsequent pH-regulated switch that leads to engagement of lysosomal-associated membrane protein 1 (LAMP1) for membrane fusion (7). Lymphocytic choriomeningitis virus (LCMV) also uses alpha-dystroglycan (6) and is internalized in a manner that depends on endosomal sorting complexes required for transport (ESCRT).

also interfered with VSV-MeGFP-LCMV and VSV-MeGFP-ZEBOV infection (Fig. 1C). All of these viruses require low pH to trigger viral membrane fusion with the endosomal membranes and, as expected, infection was fully blocked by Bafilomycin A1, which inhibits the vacuolar-type H+-ATPase (V-ATPase) acidification activity (Fig. 1C). Using live-cell spinning disk confocal microscopy (Fig. 3, 4), we monitored the presence

Table I; primers used for screening are listed in Table II.